Test Report: KVM_Linux_crio 19355

                    
6d23947514fd7a389789fed180382829b6444229:2024-08-02:35618

Failed tests (33/322)

Order  Failed test  Duration (s)
43 TestAddons/parallel/Ingress 154.2
45 TestAddons/parallel/MetricsServer 366.4
54 TestAddons/StoppedEnableDisable 154.24
173 TestMultiControlPlane/serial/StopSecondaryNode 141.77
175 TestMultiControlPlane/serial/RestartSecondaryNode 50.41
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 407.91
180 TestMultiControlPlane/serial/StopCluster 141.46
240 TestMultiNode/serial/RestartKeepsNodes 334.78
242 TestMultiNode/serial/StopMultiNode 141.27
249 TestPreload 192.99
257 TestKubernetesUpgrade 729.32
280 TestPause/serial/SecondStartNoReconfiguration 77.78
294 TestStartStop/group/old-k8s-version/serial/FirstStart 283.08
301 TestStartStop/group/no-preload/serial/Stop 139.14
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.06
305 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 96.35
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/no-preload/serial/SecondStart 361.3
311 TestStartStop/group/old-k8s-version/serial/SecondStart 767.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 541.31
330 TestStartStop/group/embed-certs/serial/Stop 139.13
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.06
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
334 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.45
335 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 541.44
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 466.23
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.16
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 111.48
340 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 2.12
341 TestStartStop/group/no-preload/serial/Pause 4.3
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 376.43
TestAddons/parallel/Ingress (154.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-892214 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-892214 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-892214 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fc59e354-2e50-4658-9768-c1a886aff1aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fc59e354-2e50-4658-9768-c1a886aff1aa] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004128232s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-892214 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.575122975s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-892214 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.4
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 addons disable ingress-dns --alsologtostderr -v=1: (1.095197307s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 addons disable ingress --alsologtostderr -v=1: (7.675376557s)
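The step that actually fails above is the in-VM curl issued through minikube ssh at addons_test.go:264: the SSH command returns status 28, curl's operation-timed-out error code, suggesting the request to 127.0.0.1 with the Host header nginx.example.com was never answered in time by the ingress controller. Below is a minimal standalone sketch, not part of the test suite, that re-runs the same check outside the harness; it assumes the binary path out/minikube-linux-amd64 and the profile addons-892214 recorded in this log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same command the test issues; a curl exit status of 28 inside the VM
	// indicates the request timed out before the ingress-backed nginx replied.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-892214",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("took %s\n%s", time.Since(start), out)
	if err != nil {
		// The harness surfaces this as "exit status 1", matching the failure above.
		fmt.Println("error:", err)
	}
}

The equivalent interactive check is the ssh curl entry recorded in the Audit table in the post-mortem logs below.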
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-892214 -n addons-892214
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 logs -n 25: (1.115673157s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-380260                                                                     | download-only-380260 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| delete  | -p download-only-399295                                                                     | download-only-399295 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-711292 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | binary-mirror-711292                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42613                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-711292                                                                     | binary-mirror-711292 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-892214 --wait=true                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-892214 ssh cat                                                                       | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | /opt/local-path-provisioner/pvc-a1b79ae1-93e6-47b1-8e06-9a59fcccfc8d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-892214 ip                                                                            | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-892214 ssh curl -s                                                                   | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | -p addons-892214                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | -p addons-892214                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-892214 ip                                                                            | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:27:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:27:50.976202   13693 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:27:50.976308   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:50.976316   13693 out.go:304] Setting ErrFile to fd 2...
	I0802 17:27:50.976321   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:50.976506   13693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:27:50.977099   13693 out.go:298] Setting JSON to false
	I0802 17:27:50.977860   13693 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":615,"bootTime":1722619056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:27:50.977913   13693 start.go:139] virtualization: kvm guest
	I0802 17:27:50.979963   13693 out.go:177] * [addons-892214] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:27:50.981185   13693 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:27:50.981207   13693 notify.go:220] Checking for updates...
	I0802 17:27:50.983312   13693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:27:50.984457   13693 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:27:50.985530   13693 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:50.986544   13693 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:27:50.987742   13693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:27:50.989084   13693 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:27:51.020982   13693 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 17:27:51.022136   13693 start.go:297] selected driver: kvm2
	I0802 17:27:51.022149   13693 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:27:51.022160   13693 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:27:51.022853   13693 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:51.022943   13693 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:27:51.037250   13693 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:27:51.037316   13693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:27:51.037623   13693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:27:51.037661   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:27:51.037672   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:27:51.037681   13693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:27:51.037757   13693 start.go:340] cluster config:
	{Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:27:51.037880   13693 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:51.040461   13693 out.go:177] * Starting "addons-892214" primary control-plane node in "addons-892214" cluster
	I0802 17:27:51.041495   13693 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:27:51.041523   13693 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:27:51.041532   13693 cache.go:56] Caching tarball of preloaded images
	I0802 17:27:51.041603   13693 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:27:51.041616   13693 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:27:51.041903   13693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json ...
	I0802 17:27:51.041924   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json: {Name:mkec90184a2a49bfc6d18b2bafcf782d87496a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:51.042063   13693 start.go:360] acquireMachinesLock for addons-892214: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:27:51.042113   13693 start.go:364] duration metric: took 34.882µs to acquireMachinesLock for "addons-892214"
	I0802 17:27:51.042131   13693 start.go:93] Provisioning new machine with config: &{Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:27:51.042183   13693 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 17:27:51.043720   13693 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0802 17:27:51.043838   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:27:51.043877   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:51.057654   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0802 17:27:51.058071   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:51.058588   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:27:51.058612   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:51.058933   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:51.059121   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:27:51.059264   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:27:51.059410   13693 start.go:159] libmachine.API.Create for "addons-892214" (driver="kvm2")
	I0802 17:27:51.059441   13693 client.go:168] LocalClient.Create starting
	I0802 17:27:51.059487   13693 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:27:51.210796   13693 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:27:51.290041   13693 main.go:141] libmachine: Running pre-create checks...
	I0802 17:27:51.290064   13693 main.go:141] libmachine: (addons-892214) Calling .PreCreateCheck
	I0802 17:27:51.290548   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:27:51.290977   13693 main.go:141] libmachine: Creating machine...
	I0802 17:27:51.290991   13693 main.go:141] libmachine: (addons-892214) Calling .Create
	I0802 17:27:51.291142   13693 main.go:141] libmachine: (addons-892214) Creating KVM machine...
	I0802 17:27:51.292350   13693 main.go:141] libmachine: (addons-892214) DBG | found existing default KVM network
	I0802 17:27:51.293058   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.292936   13717 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0802 17:27:51.293084   13693 main.go:141] libmachine: (addons-892214) DBG | created network xml: 
	I0802 17:27:51.293097   13693 main.go:141] libmachine: (addons-892214) DBG | <network>
	I0802 17:27:51.293105   13693 main.go:141] libmachine: (addons-892214) DBG |   <name>mk-addons-892214</name>
	I0802 17:27:51.293110   13693 main.go:141] libmachine: (addons-892214) DBG |   <dns enable='no'/>
	I0802 17:27:51.293118   13693 main.go:141] libmachine: (addons-892214) DBG |   
	I0802 17:27:51.293124   13693 main.go:141] libmachine: (addons-892214) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 17:27:51.293130   13693 main.go:141] libmachine: (addons-892214) DBG |     <dhcp>
	I0802 17:27:51.293136   13693 main.go:141] libmachine: (addons-892214) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 17:27:51.293143   13693 main.go:141] libmachine: (addons-892214) DBG |     </dhcp>
	I0802 17:27:51.293147   13693 main.go:141] libmachine: (addons-892214) DBG |   </ip>
	I0802 17:27:51.293152   13693 main.go:141] libmachine: (addons-892214) DBG |   
	I0802 17:27:51.293156   13693 main.go:141] libmachine: (addons-892214) DBG | </network>
	I0802 17:27:51.293162   13693 main.go:141] libmachine: (addons-892214) DBG | 
	I0802 17:27:51.298606   13693 main.go:141] libmachine: (addons-892214) DBG | trying to create private KVM network mk-addons-892214 192.168.39.0/24...
	I0802 17:27:51.359557   13693 main.go:141] libmachine: (addons-892214) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 ...
	I0802 17:27:51.359591   13693 main.go:141] libmachine: (addons-892214) DBG | private KVM network mk-addons-892214 192.168.39.0/24 created
	I0802 17:27:51.359611   13693 main.go:141] libmachine: (addons-892214) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:27:51.359633   13693 main.go:141] libmachine: (addons-892214) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:27:51.359666   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.359418   13717 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:51.622976   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.622868   13717 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa...
	I0802 17:27:51.675282   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.675158   13717 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/addons-892214.rawdisk...
	I0802 17:27:51.675303   13693 main.go:141] libmachine: (addons-892214) DBG | Writing magic tar header
	I0802 17:27:51.675313   13693 main.go:141] libmachine: (addons-892214) DBG | Writing SSH key tar header
	I0802 17:27:51.675320   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.675287   13717 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 ...
	I0802 17:27:51.675396   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214
	I0802 17:27:51.675419   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 (perms=drwx------)
	I0802 17:27:51.675430   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:27:51.675441   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:27:51.675459   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:27:51.675475   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:27:51.675490   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:27:51.675513   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:27:51.675531   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:51.675542   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:27:51.675549   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:27:51.675556   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:27:51.675563   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home
	I0802 17:27:51.675569   13693 main.go:141] libmachine: (addons-892214) DBG | Skipping /home - not owner
	I0802 17:27:51.675610   13693 main.go:141] libmachine: (addons-892214) Creating domain...
	I0802 17:27:51.676444   13693 main.go:141] libmachine: (addons-892214) define libvirt domain using xml: 
	I0802 17:27:51.676468   13693 main.go:141] libmachine: (addons-892214) <domain type='kvm'>
	I0802 17:27:51.676488   13693 main.go:141] libmachine: (addons-892214)   <name>addons-892214</name>
	I0802 17:27:51.676504   13693 main.go:141] libmachine: (addons-892214)   <memory unit='MiB'>4000</memory>
	I0802 17:27:51.676517   13693 main.go:141] libmachine: (addons-892214)   <vcpu>2</vcpu>
	I0802 17:27:51.676524   13693 main.go:141] libmachine: (addons-892214)   <features>
	I0802 17:27:51.676532   13693 main.go:141] libmachine: (addons-892214)     <acpi/>
	I0802 17:27:51.676538   13693 main.go:141] libmachine: (addons-892214)     <apic/>
	I0802 17:27:51.676543   13693 main.go:141] libmachine: (addons-892214)     <pae/>
	I0802 17:27:51.676550   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676555   13693 main.go:141] libmachine: (addons-892214)   </features>
	I0802 17:27:51.676565   13693 main.go:141] libmachine: (addons-892214)   <cpu mode='host-passthrough'>
	I0802 17:27:51.676576   13693 main.go:141] libmachine: (addons-892214)   
	I0802 17:27:51.676590   13693 main.go:141] libmachine: (addons-892214)   </cpu>
	I0802 17:27:51.676609   13693 main.go:141] libmachine: (addons-892214)   <os>
	I0802 17:27:51.676619   13693 main.go:141] libmachine: (addons-892214)     <type>hvm</type>
	I0802 17:27:51.676627   13693 main.go:141] libmachine: (addons-892214)     <boot dev='cdrom'/>
	I0802 17:27:51.676632   13693 main.go:141] libmachine: (addons-892214)     <boot dev='hd'/>
	I0802 17:27:51.676640   13693 main.go:141] libmachine: (addons-892214)     <bootmenu enable='no'/>
	I0802 17:27:51.676646   13693 main.go:141] libmachine: (addons-892214)   </os>
	I0802 17:27:51.676658   13693 main.go:141] libmachine: (addons-892214)   <devices>
	I0802 17:27:51.676672   13693 main.go:141] libmachine: (addons-892214)     <disk type='file' device='cdrom'>
	I0802 17:27:51.676694   13693 main.go:141] libmachine: (addons-892214)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/boot2docker.iso'/>
	I0802 17:27:51.676705   13693 main.go:141] libmachine: (addons-892214)       <target dev='hdc' bus='scsi'/>
	I0802 17:27:51.676716   13693 main.go:141] libmachine: (addons-892214)       <readonly/>
	I0802 17:27:51.676724   13693 main.go:141] libmachine: (addons-892214)     </disk>
	I0802 17:27:51.676733   13693 main.go:141] libmachine: (addons-892214)     <disk type='file' device='disk'>
	I0802 17:27:51.676745   13693 main.go:141] libmachine: (addons-892214)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:27:51.676760   13693 main.go:141] libmachine: (addons-892214)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/addons-892214.rawdisk'/>
	I0802 17:27:51.676773   13693 main.go:141] libmachine: (addons-892214)       <target dev='hda' bus='virtio'/>
	I0802 17:27:51.676782   13693 main.go:141] libmachine: (addons-892214)     </disk>
	I0802 17:27:51.676794   13693 main.go:141] libmachine: (addons-892214)     <interface type='network'>
	I0802 17:27:51.676805   13693 main.go:141] libmachine: (addons-892214)       <source network='mk-addons-892214'/>
	I0802 17:27:51.676814   13693 main.go:141] libmachine: (addons-892214)       <model type='virtio'/>
	I0802 17:27:51.676820   13693 main.go:141] libmachine: (addons-892214)     </interface>
	I0802 17:27:51.676826   13693 main.go:141] libmachine: (addons-892214)     <interface type='network'>
	I0802 17:27:51.676832   13693 main.go:141] libmachine: (addons-892214)       <source network='default'/>
	I0802 17:27:51.676839   13693 main.go:141] libmachine: (addons-892214)       <model type='virtio'/>
	I0802 17:27:51.676849   13693 main.go:141] libmachine: (addons-892214)     </interface>
	I0802 17:27:51.676857   13693 main.go:141] libmachine: (addons-892214)     <serial type='pty'>
	I0802 17:27:51.676862   13693 main.go:141] libmachine: (addons-892214)       <target port='0'/>
	I0802 17:27:51.676868   13693 main.go:141] libmachine: (addons-892214)     </serial>
	I0802 17:27:51.676874   13693 main.go:141] libmachine: (addons-892214)     <console type='pty'>
	I0802 17:27:51.676883   13693 main.go:141] libmachine: (addons-892214)       <target type='serial' port='0'/>
	I0802 17:27:51.676889   13693 main.go:141] libmachine: (addons-892214)     </console>
	I0802 17:27:51.676898   13693 main.go:141] libmachine: (addons-892214)     <rng model='virtio'>
	I0802 17:27:51.676905   13693 main.go:141] libmachine: (addons-892214)       <backend model='random'>/dev/random</backend>
	I0802 17:27:51.676910   13693 main.go:141] libmachine: (addons-892214)     </rng>
	I0802 17:27:51.676916   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676921   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676931   13693 main.go:141] libmachine: (addons-892214)   </devices>
	I0802 17:27:51.676939   13693 main.go:141] libmachine: (addons-892214) </domain>
	I0802 17:27:51.676948   13693 main.go:141] libmachine: (addons-892214) 
	I0802 17:27:51.682787   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:b4:62:db in network default
	I0802 17:27:51.683312   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:51.683376   13693 main.go:141] libmachine: (addons-892214) Ensuring networks are active...
	I0802 17:27:51.683853   13693 main.go:141] libmachine: (addons-892214) Ensuring network default is active
	I0802 17:27:51.684100   13693 main.go:141] libmachine: (addons-892214) Ensuring network mk-addons-892214 is active
	I0802 17:27:51.684697   13693 main.go:141] libmachine: (addons-892214) Getting domain xml...
	I0802 17:27:51.685222   13693 main.go:141] libmachine: (addons-892214) Creating domain...
	I0802 17:27:53.057283   13693 main.go:141] libmachine: (addons-892214) Waiting to get IP...
	I0802 17:27:53.058078   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.058437   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.058462   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.058423   13717 retry.go:31] will retry after 253.172901ms: waiting for machine to come up
	I0802 17:27:53.312747   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.313205   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.313228   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.313153   13717 retry.go:31] will retry after 330.782601ms: waiting for machine to come up
	I0802 17:27:53.645740   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.646084   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.646150   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.646096   13717 retry.go:31] will retry after 324.585239ms: waiting for machine to come up
	I0802 17:27:53.972530   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.973026   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.973094   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.972981   13717 retry.go:31] will retry after 430.438542ms: waiting for machine to come up
	I0802 17:27:54.404565   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:54.404999   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:54.405034   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:54.404956   13717 retry.go:31] will retry after 479.7052ms: waiting for machine to come up
	I0802 17:27:54.886623   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:54.887000   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:54.887029   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:54.886952   13717 retry.go:31] will retry after 689.858544ms: waiting for machine to come up
	I0802 17:27:55.578832   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:55.579152   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:55.579176   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:55.579132   13717 retry.go:31] will retry after 893.166889ms: waiting for machine to come up
	I0802 17:27:56.473790   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:56.474224   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:56.474249   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:56.474184   13717 retry.go:31] will retry after 1.160354236s: waiting for machine to come up
	I0802 17:27:57.636582   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:57.636997   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:57.637029   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:57.636947   13717 retry.go:31] will retry after 1.777622896s: waiting for machine to come up
	I0802 17:27:59.416754   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:59.417056   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:59.417077   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:59.417028   13717 retry.go:31] will retry after 1.803146036s: waiting for machine to come up
	I0802 17:28:01.221891   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:01.222284   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:01.222314   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:01.222220   13717 retry.go:31] will retry after 2.502803711s: waiting for machine to come up
	I0802 17:28:03.727863   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:03.728196   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:03.728220   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:03.728138   13717 retry.go:31] will retry after 2.760974284s: waiting for machine to come up
	I0802 17:28:06.490248   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:06.490596   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:06.490620   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:06.490547   13717 retry.go:31] will retry after 2.805071087s: waiting for machine to come up
	I0802 17:28:09.299439   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:09.299759   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:09.299788   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:09.299713   13717 retry.go:31] will retry after 5.09623066s: waiting for machine to come up
	I0802 17:28:14.399714   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.400052   13693 main.go:141] libmachine: (addons-892214) Found IP for machine: 192.168.39.4
	I0802 17:28:14.400081   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has current primary IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.400091   13693 main.go:141] libmachine: (addons-892214) Reserving static IP address...
	I0802 17:28:14.400355   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find host DHCP lease matching {name: "addons-892214", mac: "52:54:00:00:90:54", ip: "192.168.39.4"} in network mk-addons-892214
	I0802 17:28:14.468033   13693 main.go:141] libmachine: (addons-892214) DBG | Getting to WaitForSSH function...
	I0802 17:28:14.468059   13693 main.go:141] libmachine: (addons-892214) Reserved static IP address: 192.168.39.4
	I0802 17:28:14.468072   13693 main.go:141] libmachine: (addons-892214) Waiting for SSH to be available...
	I0802 17:28:14.470508   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.471044   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.471064   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.471327   13693 main.go:141] libmachine: (addons-892214) DBG | Using SSH client type: external
	I0802 17:28:14.471341   13693 main.go:141] libmachine: (addons-892214) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa (-rw-------)
	I0802 17:28:14.471356   13693 main.go:141] libmachine: (addons-892214) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:28:14.471364   13693 main.go:141] libmachine: (addons-892214) DBG | About to run SSH command:
	I0802 17:28:14.471373   13693 main.go:141] libmachine: (addons-892214) DBG | exit 0
	I0802 17:28:14.603145   13693 main.go:141] libmachine: (addons-892214) DBG | SSH cmd err, output: <nil>: 
	I0802 17:28:14.603397   13693 main.go:141] libmachine: (addons-892214) KVM machine creation complete!
	I0802 17:28:14.603720   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:28:14.604182   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:14.604372   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:14.604534   13693 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:28:14.604556   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:14.605815   13693 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:28:14.605832   13693 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:28:14.605840   13693 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:28:14.605847   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.608094   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.608410   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.608435   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.608536   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.608689   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.608840   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.608934   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.609089   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.609275   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.609286   13693 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:28:14.710218   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:28:14.710242   13693 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:28:14.710252   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.712634   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.712891   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.712916   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.713010   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.713157   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.713282   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.713404   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.713548   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.713703   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.713713   13693 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:28:14.811425   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:28:14.811491   13693 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:28:14.811498   13693 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:28:14.811505   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:14.811751   13693 buildroot.go:166] provisioning hostname "addons-892214"
	I0802 17:28:14.811781   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:14.812098   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.814230   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.814571   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.814602   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.814753   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.814953   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.815232   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.815375   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.815569   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.815771   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.815788   13693 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-892214 && echo "addons-892214" | sudo tee /etc/hostname
	I0802 17:28:14.928650   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-892214
	
	I0802 17:28:14.928674   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.931179   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.931566   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.931593   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.931776   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.931975   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.932140   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.932270   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.932399   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.932548   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.932564   13693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-892214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-892214/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-892214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:28:15.039049   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:28:15.039074   13693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:28:15.039128   13693 buildroot.go:174] setting up certificates
	I0802 17:28:15.039146   13693 provision.go:84] configureAuth start
	I0802 17:28:15.039161   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:15.039405   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.041864   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.042167   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.042188   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.042314   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.044301   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.044641   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.044664   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.044772   13693 provision.go:143] copyHostCerts
	I0802 17:28:15.044852   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:28:15.044986   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:28:15.045117   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:28:15.045210   13693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.addons-892214 san=[127.0.0.1 192.168.39.4 addons-892214 localhost minikube]
	I0802 17:28:15.276127   13693 provision.go:177] copyRemoteCerts
	I0802 17:28:15.276189   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:28:15.276210   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.278638   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.278875   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.278918   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.279091   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.279302   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.279464   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.279629   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.360956   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:28:15.382411   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:28:15.403516   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:28:15.424081   13693 provision.go:87] duration metric: took 384.916003ms to configureAuth
	I0802 17:28:15.424106   13693 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:28:15.424295   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:15.424381   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.426788   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.427143   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.427169   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.427306   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.427506   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.427681   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.427793   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.427967   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:15.428113   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:15.428134   13693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:28:15.680103   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:28:15.680128   13693 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:28:15.680139   13693 main.go:141] libmachine: (addons-892214) Calling .GetURL
	I0802 17:28:15.681365   13693 main.go:141] libmachine: (addons-892214) DBG | Using libvirt version 6000000
	I0802 17:28:15.683436   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.683797   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.683826   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.683980   13693 main.go:141] libmachine: Docker is up and running!
	I0802 17:28:15.683992   13693 main.go:141] libmachine: Reticulating splines...
	I0802 17:28:15.683999   13693 client.go:171] duration metric: took 24.624550565s to LocalClient.Create
	I0802 17:28:15.684019   13693 start.go:167] duration metric: took 24.624611357s to libmachine.API.Create "addons-892214"
	I0802 17:28:15.684029   13693 start.go:293] postStartSetup for "addons-892214" (driver="kvm2")
	I0802 17:28:15.684048   13693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:28:15.684064   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.684287   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:28:15.684309   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.686178   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.686471   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.686500   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.686623   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.686789   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.686926   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.687062   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.764883   13693 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:28:15.768763   13693 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:28:15.768789   13693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:28:15.768867   13693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:28:15.768892   13693 start.go:296] duration metric: took 84.849247ms for postStartSetup
	I0802 17:28:15.768925   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:28:15.769446   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.771664   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.771936   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.771967   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.772135   13693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json ...
	I0802 17:28:15.772344   13693 start.go:128] duration metric: took 24.730151057s to createHost
	I0802 17:28:15.772383   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.774276   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.774570   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.774599   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.774743   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.774898   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.775065   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.775226   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.775368   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:15.775528   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:15.775537   13693 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:28:15.875435   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722619695.853672833
	
	I0802 17:28:15.875457   13693 fix.go:216] guest clock: 1722619695.853672833
	I0802 17:28:15.875465   13693 fix.go:229] Guest: 2024-08-02 17:28:15.853672833 +0000 UTC Remote: 2024-08-02 17:28:15.772370386 +0000 UTC m=+24.827229333 (delta=81.302447ms)
	I0802 17:28:15.875498   13693 fix.go:200] guest clock delta is within tolerance: 81.302447ms
	I0802 17:28:15.875505   13693 start.go:83] releasing machines lock for "addons-892214", held for 24.833381788s
	I0802 17:28:15.875532   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.875801   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.878641   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.879541   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.879571   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.879699   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880133   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880300   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880362   13693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:28:15.880408   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.880511   13693 ssh_runner.go:195] Run: cat /version.json
	I0802 17:28:15.880533   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.883178   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883203   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883465   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.883491   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883518   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.883536   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883576   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.883770   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.883778   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.883950   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.883958   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.884110   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.884130   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.884269   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:16.004546   13693 ssh_runner.go:195] Run: systemctl --version
	I0802 17:28:16.010175   13693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:28:16.173856   13693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:28:16.179031   13693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:28:16.179121   13693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:28:16.193265   13693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:28:16.193288   13693 start.go:495] detecting cgroup driver to use...
	I0802 17:28:16.193349   13693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:28:16.210050   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:28:16.221927   13693 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:28:16.221979   13693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:28:16.234221   13693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:28:16.246494   13693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:28:16.356545   13693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:28:16.496079   13693 docker.go:233] disabling docker service ...
	I0802 17:28:16.496160   13693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:28:16.509227   13693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:28:16.521654   13693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:28:16.652010   13693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:28:16.761180   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:28:16.773772   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:28:16.791168   13693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:28:16.791230   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.800743   13693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:28:16.800829   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.810231   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.819535   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.828995   13693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:28:16.838607   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.848046   13693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.863491   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
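Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup driver, move conmon into the pod cgroup, and open unprivileged low ports. Assuming the stock layout of the drop-in, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should end up roughly as follows (a reconstruction from the commands, not a dump of the actual file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

CRI-O only picks these up after the `systemctl restart crio` that follows at 17:28:17.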
	I0802 17:28:16.872711   13693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:28:16.881407   13693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:28:16.881451   13693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:28:16.892601   13693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:28:16.901360   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:17.003241   13693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:28:17.132080   13693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:28:17.132177   13693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:28:17.136186   13693 start.go:563] Will wait 60s for crictl version
	I0802 17:28:17.136248   13693 ssh_runner.go:195] Run: which crictl
	I0802 17:28:17.139421   13693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:28:17.174482   13693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:28:17.174606   13693 ssh_runner.go:195] Run: crio --version
	I0802 17:28:17.200899   13693 ssh_runner.go:195] Run: crio --version
	I0802 17:28:17.228286   13693 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:28:17.229513   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:17.232106   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:17.232416   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:17.232447   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:17.232635   13693 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:28:17.236413   13693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:28:17.247890   13693 kubeadm.go:883] updating cluster {Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:28:17.248027   13693 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:28:17.248091   13693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:28:17.278350   13693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 17:28:17.278422   13693 ssh_runner.go:195] Run: which lz4
	I0802 17:28:17.281995   13693 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 17:28:17.285816   13693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 17:28:17.285846   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 17:28:18.400521   13693 crio.go:462] duration metric: took 1.118561309s to copy over tarball
	I0802 17:28:18.400588   13693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 17:28:20.618059   13693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217439698s)
	I0802 17:28:20.618091   13693 crio.go:469] duration metric: took 2.217545642s to extract the tarball
	I0802 17:28:20.618098   13693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 17:28:20.654721   13693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:28:20.693160   13693 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:28:20.693181   13693 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:28:20.693188   13693 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.30.3 crio true true} ...
	I0802 17:28:20.693281   13693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-892214 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:28:20.693344   13693 ssh_runner.go:195] Run: crio config
	I0802 17:28:20.738842   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:28:20.738866   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:28:20.738878   13693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:28:20.738917   13693 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-892214 NodeName:addons-892214 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:28:20.739058   13693 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-892214"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
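The manifest above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down and then handed to kubeadm init. When a start like this fails, one way to sanity-check such a file on the guest (a debugging suggestion, not part of the test flow) is to let kubeadm validate it without touching the node:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

If this kubeadm build lacks the validate subcommand, `kubeadm init --dry-run --config <file>` exercises the same parsing without creating a cluster.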
	
	I0802 17:28:20.739127   13693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:28:20.748102   13693 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:28:20.748170   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 17:28:20.756804   13693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0802 17:28:20.772147   13693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:28:20.789966   13693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0802 17:28:20.807986   13693 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I0802 17:28:20.811612   13693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:28:20.822697   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:20.966427   13693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:28:20.983220   13693 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214 for IP: 192.168.39.4
	I0802 17:28:20.983239   13693 certs.go:194] generating shared ca certs ...
	I0802 17:28:20.983259   13693 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:20.983400   13693 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:28:21.056606   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt ...
	I0802 17:28:21.056633   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt: {Name:mk7f2c81f05a97dea4ed48c16c19f59235c98d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.056811   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key ...
	I0802 17:28:21.056827   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key: {Name:mk3b486491520ba40a02b021ce755433ce8d0de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.056923   13693 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:28:21.286187   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt ...
	I0802 17:28:21.286224   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt: {Name:mkb20879e4d0347acb03a2cb528decfd19f1525d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.286438   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key ...
	I0802 17:28:21.286456   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key: {Name:mk5d8ccb4c0b21bba1534a9aa4c7e6d10b5e11e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.286567   13693 certs.go:256] generating profile certs ...
	I0802 17:28:21.286641   13693 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key
	I0802 17:28:21.286668   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt with IP's: []
	I0802 17:28:21.372555   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt ...
	I0802 17:28:21.372593   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: {Name:mk9153b58d05737bb3729486a9de5259d8b40218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.372787   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key ...
	I0802 17:28:21.372803   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key: {Name:mka8c048727605d8e0f9e1df6d4be86275965409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.372910   13693 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf
	I0802 17:28:21.372945   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4]
	I0802 17:28:21.767436   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf ...
	I0802 17:28:21.767470   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf: {Name:mk4d86360581729e088f3e659727b5d1fbd4296f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.767632   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf ...
	I0802 17:28:21.767646   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf: {Name:mk7653b5966696f07abb50c8f3ffb9a775b79ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.767716   13693 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt
	I0802 17:28:21.767790   13693 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key
	I0802 17:28:21.767834   13693 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key
	I0802 17:28:21.767852   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt with IP's: []
	I0802 17:28:21.934875   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt ...
	I0802 17:28:21.934908   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt: {Name:mk8ac06a3eac335a09fee7d690e1936ed369e3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.935067   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key ...
	I0802 17:28:21.935077   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key: {Name:mk4aadd41b1e90e47581c1d4d731e1d3b3bf970f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.935275   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:28:21.935308   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:28:21.935344   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:28:21.935367   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:28:21.935900   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:28:21.958757   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:28:21.980135   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:28:22.000723   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:28:22.021394   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 17:28:22.042084   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:28:22.062828   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:28:22.086261   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:28:22.109090   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:28:22.131632   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:28:22.146735   13693 ssh_runner.go:195] Run: openssl version
	I0802 17:28:22.152267   13693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:28:22.161654   13693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.165510   13693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.165572   13693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.170949   13693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
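The steps above wire the minikube CA into the guest's OpenSSL trust store: the certificate is linked into /usr/share/ca-certificates and /etc/ssl/certs, its subject hash is computed with `openssl x509 -hash`, and a hash-named symlink is created so OpenSSL's lookup-by-hash finds it. A minimal shell sketch of the same sequence, using the paths from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

which matches the b5213941.0 link the test creates at 17:28:22.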
	I0802 17:28:22.180346   13693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:28:22.183818   13693 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:28:22.183870   13693 kubeadm.go:392] StartCluster: {Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:28:22.183979   13693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:28:22.184020   13693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:28:22.217533   13693 cri.go:89] found id: ""
	I0802 17:28:22.217627   13693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 17:28:22.226797   13693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 17:28:22.235428   13693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 17:28:22.243746   13693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 17:28:22.243768   13693 kubeadm.go:157] found existing configuration files:
	
	I0802 17:28:22.243818   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 17:28:22.251773   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 17:28:22.251830   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 17:28:22.260184   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 17:28:22.268018   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 17:28:22.268065   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 17:28:22.276195   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 17:28:22.283962   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 17:28:22.284016   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 17:28:22.292229   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 17:28:22.299998   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 17:28:22.300043   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
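The four checks above all follow one pattern: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and, when the file is missing or does not mention it (grep exits non-zero), delete it so the following kubeadm init can regenerate it. The Go sketch below mirrors that pattern using only the commands and paths visible in the log; it is an illustration, not minikube's actual kubeadm.go code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanupStaleKubeconfigs mirrors the grep-then-rm pattern in the log:
    // any kubeconfig that does not reference the expected endpoint is removed
    // so that "kubeadm init" can write a fresh one.
    func cleanupStaleKubeconfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits 1 on no match and 2 when the file is missing
            // (status 2 is what the log above shows).
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", endpoint, f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }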
	I0802 17:28:22.308216   13693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 17:28:22.504913   13693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 17:28:32.343249   13693 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 17:28:32.343323   13693 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 17:28:32.343406   13693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 17:28:32.343583   13693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 17:28:32.343713   13693 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 17:28:32.343804   13693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 17:28:32.345280   13693 out.go:204]   - Generating certificates and keys ...
	I0802 17:28:32.345391   13693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 17:28:32.345467   13693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 17:28:32.345559   13693 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 17:28:32.345647   13693 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 17:28:32.345714   13693 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 17:28:32.345771   13693 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 17:28:32.345825   13693 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 17:28:32.345935   13693 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-892214 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0802 17:28:32.346021   13693 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 17:28:32.346156   13693 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-892214 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0802 17:28:32.346254   13693 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 17:28:32.346354   13693 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 17:28:32.346423   13693 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 17:28:32.346505   13693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 17:28:32.346588   13693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 17:28:32.346680   13693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 17:28:32.346730   13693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 17:28:32.346787   13693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 17:28:32.346832   13693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 17:28:32.346898   13693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 17:28:32.346955   13693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 17:28:32.348284   13693 out.go:204]   - Booting up control plane ...
	I0802 17:28:32.348363   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 17:28:32.348439   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 17:28:32.348495   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 17:28:32.348583   13693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 17:28:32.348691   13693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 17:28:32.348733   13693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 17:28:32.348840   13693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 17:28:32.348903   13693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 17:28:32.348957   13693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.272826ms
	I0802 17:28:32.349045   13693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 17:28:32.349127   13693 kubeadm.go:310] [api-check] The API server is healthy after 5.0012416s
	I0802 17:28:32.349253   13693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 17:28:32.349396   13693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 17:28:32.349452   13693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 17:28:32.349620   13693 kubeadm.go:310] [mark-control-plane] Marking the node addons-892214 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 17:28:32.349717   13693 kubeadm.go:310] [bootstrap-token] Using token: zy0nf3.h41pvfnv7qqy1skc
	I0802 17:28:32.350922   13693 out.go:204]   - Configuring RBAC rules ...
	I0802 17:28:32.351024   13693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 17:28:32.351155   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 17:28:32.351313   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 17:28:32.351459   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 17:28:32.351623   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 17:28:32.351739   13693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 17:28:32.351862   13693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 17:28:32.351929   13693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 17:28:32.351987   13693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 17:28:32.351996   13693 kubeadm.go:310] 
	I0802 17:28:32.352080   13693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 17:28:32.352091   13693 kubeadm.go:310] 
	I0802 17:28:32.352170   13693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 17:28:32.352176   13693 kubeadm.go:310] 
	I0802 17:28:32.352203   13693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 17:28:32.352252   13693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 17:28:32.352303   13693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 17:28:32.352309   13693 kubeadm.go:310] 
	I0802 17:28:32.352352   13693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 17:28:32.352357   13693 kubeadm.go:310] 
	I0802 17:28:32.352431   13693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 17:28:32.352445   13693 kubeadm.go:310] 
	I0802 17:28:32.352530   13693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 17:28:32.352597   13693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 17:28:32.352685   13693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 17:28:32.352693   13693 kubeadm.go:310] 
	I0802 17:28:32.352807   13693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 17:28:32.352912   13693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 17:28:32.352921   13693 kubeadm.go:310] 
	I0802 17:28:32.353027   13693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zy0nf3.h41pvfnv7qqy1skc \
	I0802 17:28:32.353148   13693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 17:28:32.353178   13693 kubeadm.go:310] 	--control-plane 
	I0802 17:28:32.353186   13693 kubeadm.go:310] 
	I0802 17:28:32.353301   13693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 17:28:32.353310   13693 kubeadm.go:310] 
	I0802 17:28:32.353422   13693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zy0nf3.h41pvfnv7qqy1skc \
	I0802 17:28:32.353564   13693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
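The --discovery-token-ca-cert-hash printed in the join commands is a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. The sketch below recomputes such a hash; the CA path is an assumption based on the certificateDir "/var/lib/minikube/certs" reported earlier, and this is a minimal reproduction rather than kubeadm's own code.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash computes the value used for --discovery-token-ca-cert-hash:
    // a SHA-256 digest of the CA cert's DER-encoded Subject Public Key Info.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM data in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
        // Assumed location, based on the certificateDir logged above.
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        fmt.Println(h) // should match the hash in the join command above
    }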
	I0802 17:28:32.353578   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:28:32.353586   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:28:32.354926   13693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 17:28:32.356125   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 17:28:32.365893   13693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
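The 496-byte 1-k8s.conflist copied here is not shown in the log. The sketch below writes an illustrative bridge CNI conflist of the kind this step installs; every field value is an assumption, not the exact file minikube generates.

    package main

    import "os"

    // An illustrative bridge CNI conflist; the actual 1-k8s.conflist that
    // minikube copies over SSH may use different names, plugins and ranges.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }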
	I0802 17:28:32.382705   13693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 17:28:32.382770   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:32.382791   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-892214 minikube.k8s.io/updated_at=2024_08_02T17_28_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=addons-892214 minikube.k8s.io/primary=true
	I0802 17:28:32.410380   13693 ops.go:34] apiserver oom_adj: -16
	I0802 17:28:32.506457   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:33.006704   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:33.506661   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:34.006795   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:34.507010   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:35.007446   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:35.507523   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:36.006518   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:36.506988   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:37.007428   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:37.507387   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:38.006517   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:38.506538   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:39.007255   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:39.506981   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:40.007437   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:40.507292   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:41.007047   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:41.506743   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:42.007004   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:42.506819   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:43.007204   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:43.506873   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:44.007222   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:44.506801   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:45.006566   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:45.507172   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:46.007264   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:46.113047   13693 kubeadm.go:1113] duration metric: took 13.730333253s to wait for elevateKubeSystemPrivileges
	I0802 17:28:46.113085   13693 kubeadm.go:394] duration metric: took 23.929215755s to StartCluster
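The long run of identical "kubectl get sa default" invocations above is a poll loop: the command is retried roughly every 500ms until the default service account exists, which is what the 13.73s elevateKubeSystemPrivileges metric measures. A minimal sketch of such a loop is below; the overall timeout is assumed, since only the elapsed time appears in the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls "kubectl get sa default" until it succeeds or the
    // deadline passes, mirroring the repeated runs logged above (~2 per second).
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA(
            "/var/lib/minikube/binaries/v1.30.3/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute, // assumed timeout; the log only shows the ~13.7s it actually took
        )
        fmt.Println(err)
    }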
	I0802 17:28:46.113106   13693 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:46.113226   13693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:28:46.113576   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:46.113828   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 17:28:46.113829   13693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:28:46.113875   13693 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
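The interleaved "Setting addon" and libmachine lines that follow come from enabling the requested addons concurrently; each worker launches its own kvm2 driver plugin server, which is why the "Plugin server listening" and API-version lines repeat. A rough sketch of that fan-out is below; enableAddon is a hypothetical stand-in, and the real addons.go does considerably more per addon.

    package main

    import (
        "fmt"
        "sync"
    )

    // enableAddon is a hypothetical stand-in for the real per-addon setup
    // (connect to the driver plugin, SSH the manifest over, apply it).
    func enableAddon(profile, name string) error {
        fmt.Printf("Setting addon %s=true in %q\n", name, profile)
        return nil
    }

    func main() {
        toEnable := []string{
            "csi-hostpath-driver", "nvidia-device-plugin", "metrics-server",
            "ingress", "ingress-dns", "registry", "storage-provisioner",
        }
        var wg sync.WaitGroup
        for _, name := range toEnable {
            wg.Add(1)
            go func(n string) { // one goroutine per addon, as the interleaved log suggests
                defer wg.Done()
                if err := enableAddon("addons-892214", n); err != nil {
                    fmt.Println("!", n, err)
                }
            }(name)
        }
        wg.Wait()
    }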
	I0802 17:28:46.113984   13693 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-892214"
	I0802 17:28:46.114000   13693 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-892214"
	I0802 17:28:46.114003   13693 addons.go:69] Setting metrics-server=true in profile "addons-892214"
	I0802 17:28:46.114031   13693 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-892214"
	I0802 17:28:46.114038   13693 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-892214"
	I0802 17:28:46.114048   13693 addons.go:69] Setting ingress=true in profile "addons-892214"
	I0802 17:28:46.114039   13693 addons.go:69] Setting default-storageclass=true in profile "addons-892214"
	I0802 17:28:46.114060   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:46.114075   13693 addons.go:69] Setting registry=true in profile "addons-892214"
	I0802 17:28:46.114078   13693 addons.go:69] Setting ingress-dns=true in profile "addons-892214"
	I0802 17:28:46.114078   13693 addons.go:69] Setting cloud-spanner=true in profile "addons-892214"
	I0802 17:28:46.114081   13693 addons.go:69] Setting inspektor-gadget=true in profile "addons-892214"
	I0802 17:28:46.114093   13693 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-892214"
	I0802 17:28:46.114103   13693 addons.go:234] Setting addon cloud-spanner=true in "addons-892214"
	I0802 17:28:46.114103   13693 addons.go:69] Setting volcano=true in profile "addons-892214"
	I0802 17:28:46.114106   13693 addons.go:69] Setting volumesnapshots=true in profile "addons-892214"
	I0802 17:28:46.114038   13693 addons.go:234] Setting addon metrics-server=true in "addons-892214"
	I0802 17:28:46.114116   13693 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-892214"
	I0802 17:28:46.114120   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114122   13693 addons.go:234] Setting addon volcano=true in "addons-892214"
	I0802 17:28:46.114125   13693 addons.go:234] Setting addon volumesnapshots=true in "addons-892214"
	I0802 17:28:46.114140   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114095   13693 addons.go:234] Setting addon registry=true in "addons-892214"
	I0802 17:28:46.114179   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114105   13693 addons.go:234] Setting addon inspektor-gadget=true in "addons-892214"
	I0802 17:28:46.114292   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114081   13693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-892214"
	I0802 17:28:46.114140   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114545   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114556   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114562   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114576   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114579   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114595   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114066   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114545   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114671   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114687   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114703   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114710   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114071   13693 addons.go:234] Setting addon ingress=true in "addons-892214"
	I0802 17:28:46.113988   13693 addons.go:69] Setting yakd=true in profile "addons-892214"
	I0802 17:28:46.114725   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114744   13693 addons.go:234] Setting addon yakd=true in "addons-892214"
	I0802 17:28:46.114068   13693 addons.go:69] Setting gcp-auth=true in profile "addons-892214"
	I0802 17:28:46.114039   13693 addons.go:69] Setting helm-tiller=true in profile "addons-892214"
	I0802 17:28:46.114776   13693 mustload.go:65] Loading cluster: addons-892214
	I0802 17:28:46.114095   13693 addons.go:234] Setting addon ingress-dns=true in "addons-892214"
	I0802 17:28:46.114148   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114796   13693 addons.go:234] Setting addon helm-tiller=true in "addons-892214"
	I0802 17:28:46.114110   13693 addons.go:69] Setting storage-provisioner=true in profile "addons-892214"
	I0802 17:28:46.114069   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114921   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114942   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114970   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114894   13693 addons.go:234] Setting addon storage-provisioner=true in "addons-892214"
	I0802 17:28:46.115032   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115096   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115138   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115145   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115211   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115231   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115270   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115272   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115286   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115295   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115381   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115418   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115456   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115523   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115563   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.116001   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.116324   13693 out.go:177] * Verifying Kubernetes components...
	I0802 17:28:46.116401   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.116422   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.127412   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:46.134986   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0802 17:28:46.135355   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0802 17:28:46.135534   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.135649   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0802 17:28:46.135885   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.136077   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136095   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136234   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.136376   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136388   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136444   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.136677   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136702   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136758   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.136928   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.136997   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.137368   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.137403   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.137536   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.137570   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.138247   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0802 17:28:46.138676   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.139158   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.139177   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.139478   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.139992   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.140027   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.143478   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.143516   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.144682   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:46.145019   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.145051   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.145589   13693 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-892214"
	I0802 17:28:46.145641   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.145997   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.146028   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.161117   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33651
	I0802 17:28:46.162229   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0802 17:28:46.162811   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.162922   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0802 17:28:46.165292   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0802 17:28:46.165318   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0802 17:28:46.165427   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.165785   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.165834   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.166266   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.166276   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.166284   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.166293   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.166612   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.166612   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.166797   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.167184   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.167202   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.167206   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.167236   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.167549   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.168127   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.168169   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.168409   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.168493   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.168512   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.168844   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.169190   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.169210   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.169224   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.169413   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.169453   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.169648   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.170204   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.170236   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.171217   13693 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0802 17:28:46.172787   13693 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:28:46.172811   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0802 17:28:46.172829   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.176064   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.176622   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.176650   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.176797   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.176960   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.177112   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.177235   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
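Each "new ssh client" line here and below corresponds to a fresh SSH connection used to copy an addon manifest onto the node. A minimal sketch of opening such a connection with golang.org/x/crypto/ssh, reusing the IP, port, key path and username from the sshutil.go line above (minikube's own sshutil package is more elaborate):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Values taken from the sshutil.go log line above.
        keyPath := "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.4:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println("connected:", string(client.ServerVersion()))
    }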
	I0802 17:28:46.189148   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0802 17:28:46.189600   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.190087   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.190100   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.190385   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.190788   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.190811   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.191574   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0802 17:28:46.191988   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.192126   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0802 17:28:46.192532   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.192548   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.192605   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.192961   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.193684   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.193742   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0802 17:28:46.193748   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.193764   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.194235   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.194700   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.194760   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.194774   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.195358   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.195412   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.195592   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.195649   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.195991   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.196026   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.196216   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.197741   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.199496   13693 out.go:177]   - Using image docker.io/registry:2.8.3
	I0802 17:28:46.200457   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0802 17:28:46.201687   13693 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0802 17:28:46.202331   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.202610   13693 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0802 17:28:46.202625   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0802 17:28:46.202642   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.205280   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34463
	I0802 17:28:46.206430   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.206726   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.207256   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.207278   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.207329   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.207346   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.207986   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.208015   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.208048   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.208227   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.208271   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.209203   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.210019   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.210106   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.210382   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.210856   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.211693   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.211738   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.212095   13693 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0802 17:28:46.212413   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43253
	I0802 17:28:46.213032   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.213268   13693 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0802 17:28:46.213282   13693 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0802 17:28:46.213301   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.213809   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.213825   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.214172   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.214736   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.214775   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.215032   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0802 17:28:46.215882   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0802 17:28:46.216286   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.216959   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I0802 17:28:46.217797   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.217813   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.218368   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0802 17:28:46.218603   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.218710   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.218760   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.218799   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0802 17:28:46.219081   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.219167   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.219563   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.219673   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.219699   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.219597   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.219719   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.219910   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.220067   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.220230   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.220390   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.220650   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.220867   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.220990   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.221003   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.221054   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.221330   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I0802 17:28:46.221347   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.221407   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:46.221415   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:46.221628   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.221648   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.221719   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.221776   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:46.221795   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:46.221803   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:46.221810   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:46.221816   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:46.221830   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.221850   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.221964   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:46.221986   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:46.221993   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	W0802 17:28:46.222066   13693 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0802 17:28:46.222134   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.222452   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.223552   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.223769   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.223827   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0802 17:28:46.224142   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.224964   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.224989   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.225391   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.225419   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.225623   13693 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0802 17:28:46.225799   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.225819   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.225970   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.226004   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.226111   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.226130   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.226407   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.226976   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.227004   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.227175   13693 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0802 17:28:46.227931   13693 out.go:177]   - Using image docker.io/busybox:stable
	I0802 17:28:46.227396   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.229043   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0802 17:28:46.229177   13693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:28:46.229192   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0802 17:28:46.229211   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.229278   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 17:28:46.229286   13693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 17:28:46.229299   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.230270   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.230816   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.230833   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.231282   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0802 17:28:46.231391   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.231807   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.232124   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.233004   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.233843   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.233903   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234161   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.234177   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234207   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.234264   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.234278   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234544   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.234558   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.234670   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0802 17:28:46.234776   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.234818   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.234859   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.234925   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.234957   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.234990   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.235024   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.235320   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.235551   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.236297   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0802 17:28:46.236738   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.237224   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.237239   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.237291   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.237892   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.238110   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.238339   13693 addons.go:234] Setting addon default-storageclass=true in "addons-892214"
	I0802 17:28:46.238382   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.238421   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.239063   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.239099   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.239332   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.239454   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.239773   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.239830   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.240126   13693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 17:28:46.240152   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.241817   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.241854   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0802 17:28:46.241925   13693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:28:46.241941   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 17:28:46.241958   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.243344   13693 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0802 17:28:46.243396   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0802 17:28:46.243411   13693 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0802 17:28:46.243430   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.244803   13693 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0802 17:28:46.244822   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0802 17:28:46.244840   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.245804   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.246554   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.246577   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.246795   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.247083   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.247245   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.247384   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.247808   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.248255   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.248276   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.248509   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.248674   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.248811   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.248933   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.249795   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.250158   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.250175   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.250355   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.250497   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.250627   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.250762   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.253583   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0802 17:28:46.253959   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.254408   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.254432   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.254760   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.255007   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.257385   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.259360   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32903
	I0802 17:28:46.259779   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.260305   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.260324   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.260385   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0802 17:28:46.260829   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.260993   13693 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0802 17:28:46.261535   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.261553   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.261837   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0802 17:28:46.261870   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.262319   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.262336   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.262549   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.262637   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.262719   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0802 17:28:46.262732   13693 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0802 17:28:46.262749   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.262798   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.263140   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.263156   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.263616   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.264019   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.264664   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.266110   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.266179   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:46.267249   13693 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0802 17:28:46.267483   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0802 17:28:46.267898   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.268380   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.268398   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.268421   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0802 17:28:46.268436   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0802 17:28:46.268450   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0802 17:28:46.268456   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.269090   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.269286   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.269601   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.270014   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.270051   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.270305   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.270456   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.270581   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.270701   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.270780   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:46.270972   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0802 17:28:46.271613   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.271726   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.272034   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.272251   13693 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:28:46.272266   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0802 17:28:46.272281   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.272302   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.272323   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.272789   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0802 17:28:46.272821   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.272840   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.272855   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.272895   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.273046   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.273068   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.273207   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.273347   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.275061   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.275118   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0802 17:28:46.275894   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.276403   13693 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0802 17:28:46.276419   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.276568   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.276600   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.276731   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.276852   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.276969   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.277781   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0802 17:28:46.277891   13693 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:28:46.277903   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0802 17:28:46.277918   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.279758   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0802 17:28:46.280637   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0802 17:28:46.281051   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.281262   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.281482   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.281498   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.281884   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0802 17:28:46.281891   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.281919   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.281939   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.282046   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.282081   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.282207   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.282239   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.282383   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.283584   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.284065   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0802 17:28:46.284347   13693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 17:28:46.284365   13693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 17:28:46.284382   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.285986   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0802 17:28:46.287090   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.287114   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0802 17:28:46.287574   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.287597   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.287767   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.288073   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.288233   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.288361   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.288448   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0802 17:28:46.288461   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0802 17:28:46.288477   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.291193   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.291569   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.291598   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.291805   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.291941   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.292053   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.292170   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.483280   13693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:28:46.483375   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 17:28:46.565500   13693 node_ready.go:35] waiting up to 6m0s for node "addons-892214" to be "Ready" ...
	I0802 17:28:46.568465   13693 node_ready.go:49] node "addons-892214" has status "Ready":"True"
	I0802 17:28:46.568488   13693 node_ready.go:38] duration metric: took 2.963043ms for node "addons-892214" to be "Ready" ...
	I0802 17:28:46.568499   13693 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:28:46.580321   13693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace to be "Ready" ...
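	For reference, the node_ready/pod_ready lines above are plain polls against the API server: the node check returns as soon as the Node's Ready condition is True, and each system-critical pod gets the same treatment within the 6m0s budget. A minimal stand-alone sketch of that pattern with client-go follows; the pollNodeReady name, the 2-second poll interval and the kubeconfig path in main are illustrative assumptions, not minikube's node_ready.go.

    // pollNodeReady is an illustrative sketch (not minikube's node_ready.go):
    // poll until the named node reports Ready=True or the timeout expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func pollNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // keep polling on transient errors
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        // Assumed kubeconfig path for the sketch; the logged commands use
        // /var/lib/minikube/kubeconfig on the VM.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(pollNodeReady(cs, "addons-892214", 6*time.Minute))
    }

	The pod_ready checks that follow have the same shape, reading each pod's Ready condition instead of the node's.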
	I0802 17:28:46.633113   13693 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0802 17:28:46.633135   13693 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0802 17:28:46.714795   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:28:46.735603   13693 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0802 17:28:46.735643   13693 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0802 17:28:46.736403   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:28:46.748407   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:28:46.775088   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0802 17:28:46.775125   13693 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0802 17:28:46.789973   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:28:46.790999   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 17:28:46.791014   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0802 17:28:46.792704   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0802 17:28:46.792719   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0802 17:28:46.806854   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0802 17:28:46.819165   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0802 17:28:46.819186   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0802 17:28:46.822200   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:28:46.851146   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0802 17:28:46.851172   13693 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0802 17:28:46.862480   13693 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0802 17:28:46.862508   13693 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0802 17:28:46.884028   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 17:28:46.942834   13693 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:28:46.942854   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0802 17:28:46.969522   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0802 17:28:46.969549   13693 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0802 17:28:46.992935   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 17:28:46.992953   13693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 17:28:47.005041   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0802 17:28:47.005060   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0802 17:28:47.026333   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:28:47.026353   13693 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0802 17:28:47.032143   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0802 17:28:47.032163   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0802 17:28:47.054431   13693 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0802 17:28:47.054451   13693 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0802 17:28:47.157337   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:28:47.177491   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0802 17:28:47.177520   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0802 17:28:47.190565   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:28:47.190600   13693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 17:28:47.207564   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:28:47.208626   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0802 17:28:47.208650   13693 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0802 17:28:47.218327   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0802 17:28:47.218348   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0802 17:28:47.228613   13693 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0802 17:28:47.228633   13693 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0802 17:28:47.357958   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0802 17:28:47.357985   13693 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0802 17:28:47.358535   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0802 17:28:47.358557   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0802 17:28:47.386518   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:28:47.386538   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0802 17:28:47.460968   13693 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0802 17:28:47.460991   13693 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0802 17:28:47.462462   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:28:47.566677   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0802 17:28:47.566717   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0802 17:28:47.576030   13693 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:47.576049   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0802 17:28:47.596382   13693 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0802 17:28:47.596412   13693 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0802 17:28:47.718094   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:28:47.936147   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:47.983378   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0802 17:28:47.983410   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0802 17:28:48.005520   13693 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:28:48.005552   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0802 17:28:48.174543   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0802 17:28:48.174604   13693 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0802 17:28:48.276395   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:28:48.492946   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0802 17:28:48.492972   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0802 17:28:48.597064   13693 pod_ready.go:102] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:48.623012   13693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.139600751s)
	I0802 17:28:48.623048   13693 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
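	The 2.1s command completed just above is the sed pipeline launched at 17:28:46.483375: it edits the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the host-side gateway (192.168.39.1 on this run) and pushes the result back with kubectl replace -f -. The stanza it injects ahead of the forward directive is reproduced below as a Go raw string purely for readability; the constant name is an assumption.

    // corefileHostsBlock mirrors the hosts stanza the sed pipeline inserts ahead of
    // the "forward . /etc/resolv.conf" line of the coredns Corefile; only the
    // gateway IP (192.168.39.1 here) varies per cluster.
    package main

    import "fmt"

    const corefileHostsBlock = `        hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }`

    func main() { fmt.Println(corefileHostsBlock) }

	The same pipeline's second -e expression also inserts a log directive before errors, as the logged command shows.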
	I0802 17:28:48.623067   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.908239944s)
	I0802 17:28:48.623148   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:48.623163   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:48.623469   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:48.623481   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:48.623491   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:48.623500   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:48.623795   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:48.623809   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:48.751484   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0802 17:28:48.751502   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0802 17:28:49.012217   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:28:49.012240   13693 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0802 17:28:49.127201   13693 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-892214" context rescaled to 1 replicas
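	The kapi.go:214 line above is minikube pinning the coredns Deployment to a single replica (two coredns pods were running earlier in this log). A rough client-go sketch of that rescale is below; the scaleCoreDNS helper and the kubeconfig path are assumptions for illustration, not minikube's kapi implementation.

    // scaleCoreDNS is an illustrative sketch of the rescale logged by kapi.go:214;
    // it resizes the kube-system/coredns Deployment via the scale subresource.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func scaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
        ctx := context.TODO()
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }

    func main() {
        // Assumed kubeconfig path; the test drives kubectl with
        // /var/lib/minikube/kubeconfig on the VM.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println(scaleCoreDNS(kubernetes.NewForConfigOrDie(cfg), 1))
    }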
	I0802 17:28:49.290739   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:28:50.632254   13693 pod_ready.go:102] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:51.165599   13693 pod_ready.go:92] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.165630   13693 pod_ready.go:81] duration metric: took 4.585283703s for pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.165644   13693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.169108   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.432668876s)
	I0802 17:28:51.169159   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:51.169173   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:51.169468   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:51.169484   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:51.169493   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:51.169500   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:51.169521   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:51.169781   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:51.169795   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:51.169816   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:51.271034   13693 pod_ready.go:92] pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.271056   13693 pod_ready.go:81] duration metric: took 105.405602ms for pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.271066   13693 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.330385   13693 pod_ready.go:92] pod "etcd-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.330409   13693 pod_ready.go:81] duration metric: took 59.3373ms for pod "etcd-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.330421   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.393267   13693 pod_ready.go:92] pod "kube-apiserver-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.393290   13693 pod_ready.go:81] duration metric: took 62.861059ms for pod "kube-apiserver-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.393303   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.440066   13693 pod_ready.go:92] pod "kube-controller-manager-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.440097   13693 pod_ready.go:81] duration metric: took 46.784985ms for pod "kube-controller-manager-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.440110   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54c9t" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.501853   13693 pod_ready.go:92] pod "kube-proxy-54c9t" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.501879   13693 pod_ready.go:81] duration metric: took 61.753814ms for pod "kube-proxy-54c9t" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.501892   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.911043   13693 pod_ready.go:92] pod "kube-scheduler-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.911063   13693 pod_ready.go:81] duration metric: took 409.163292ms for pod "kube-scheduler-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.911071   13693 pod_ready.go:38] duration metric: took 5.342552875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:28:51.911086   13693 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:28:51.911149   13693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:28:53.290522   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0802 17:28:53.290558   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:53.293561   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.293941   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:53.293972   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.294162   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:53.294385   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:53.294568   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:53.294766   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:53.735559   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0802 17:28:53.866599   13693 addons.go:234] Setting addon gcp-auth=true in "addons-892214"
	I0802 17:28:53.866657   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:53.866972   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:53.866998   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:53.882142   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I0802 17:28:53.882640   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:53.883069   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:53.883085   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:53.883410   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:53.883858   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:53.883883   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:53.899244   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0802 17:28:53.899639   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:53.900084   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:53.900106   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:53.900415   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:53.900626   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:53.902410   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:53.902634   13693 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0802 17:28:53.902659   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:53.905316   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.905898   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:53.905926   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.906070   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:53.906236   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:53.906434   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:53.906621   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
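	Each sshutil.go:53 entry in this log is a fresh key-based SSH session into the VM at 192.168.39.4:22 as the docker user, used to scp an addon manifest into /etc/kubernetes/addons and to run commands there. A compact stand-alone sketch of such a client with golang.org/x/crypto/ssh follows; it is not minikube's sshutil, and the InsecureIgnoreHostKey callback and the systemctl probe in main are assumptions kept only to make the example short.

    // newSSHClient is an illustrative sketch of a key-based SSH client like the
    // ones logged by sshutil.go:53; it is not the minikube implementation.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        c, err := newSSHClient("192.168.39.4:22", "docker",
            "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa")
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer c.Close()
        s, err := c.NewSession() // run a single command over one session
        if err != nil {
            fmt.Println("session failed:", err)
            return
        }
        defer s.Close()
        out, _ := s.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s", out)
    }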
	I0802 17:28:54.270164   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.521724443s)
	I0802 17:28:54.270208   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270217   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270300   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.480298903s)
	I0802 17:28:54.270347   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270364   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270411   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.463531122s)
	I0802 17:28:54.270438   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.448220756s)
	I0802 17:28:54.270443   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270453   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270457   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270466   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270516   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.270547   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.386494807s)
	I0802 17:28:54.270544   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.270562   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270568   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.270570   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270581   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270590   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270628   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.113264304s)
	I0802 17:28:54.270652   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270661   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270724   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.063131391s)
	I0802 17:28:54.270737   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270744   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270822   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.80834s)
	I0802 17:28:54.270841   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270849   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270919   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.552798995s)
	I0802 17:28:54.270934   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270942   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270984   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271003   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271016   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271026   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271032   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271032   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271039   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271043   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271053   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271055   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271067   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.334870873s)
	I0802 17:28:54.271081   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271045   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	W0802 17:28:54.271094   13693 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0802 17:28:54.271131   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271144   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271142   13693 retry.go:31] will retry after 140.165343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
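	The failure above is an ordering race rather than a broken manifest: the same kubectl apply both registers the snapshot.storage.k8s.io CRDs and creates a VolumeSnapshotClass from csi-hostpath-snapshotclass.yaml, and the API server is not yet serving the new kind when the class object arrives, hence "ensure CRDs are installed first" and the 140ms retry from retry.go. A minimal sketch of that apply-and-retry pattern follows; the applyWithRetry helper, the linear backoff and the 30-attempt budget are illustrative assumptions, not minikube's retry.go.

    // applyWithRetry is an illustrative sketch of "apply, and if the CRD mapping is
    // not ready yet, back off and re-apply" (the behaviour retry.go logs above).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func applyWithRetry(kubectl string, files []string, attempts int) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command(kubectl, args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("%v: %s", err, out)
            // Only the "CRD not yet served" case is worth retrying here.
            if !strings.Contains(string(out), "ensure CRDs are installed first") {
                return lastErr
            }
            time.Sleep(time.Duration(i+1) * 200 * time.Millisecond) // simple linear backoff
        }
        return lastErr
    }

    func main() {
        err := applyWithRetry("kubectl", []string{
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
        }, 30)
        fmt.Println(err)
    }

	Once the CRDs are established, a re-apply of the same manifests normally goes through.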
	I0802 17:28:54.271152   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271167   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271168   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271188   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271195   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271203   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271209   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271277   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.994847417s)
	I0802 17:28:54.271283   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271299   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271305   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271308   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271313   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271322   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271330   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271071   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271346   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271358   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271363   13693 addons.go:475] Verifying addon ingress=true in "addons-892214"
	I0802 17:28:54.271379   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271403   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271410   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271416   13693 addons.go:475] Verifying addon registry=true in "addons-892214"
	I0802 17:28:54.271856   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271881   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271888   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271896   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271902   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271945   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271964   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271970   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271978   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271987   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.272021   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272040   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272047   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272470   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272493   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272500   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272624   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272638   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272761   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272831   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272840   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272866   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.272874   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271097   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.273350   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.273362   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.273370   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.273377   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.273675   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.273699   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.273706   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.274028   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.274047   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.274052   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.274493   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.274520   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.275722   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.275737   13693 out.go:177] * Verifying registry addon...
	I0802 17:28:54.274947   13693 out.go:177] * Verifying ingress addon...
	I0802 17:28:54.274985   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.275009   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.275025   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.275041   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.276888   13693 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-892214 service yakd-dashboard -n yakd-dashboard
	
	I0802 17:28:54.277252   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.277261   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.277263   13693 addons.go:475] Verifying addon metrics-server=true in "addons-892214"
	I0802 17:28:54.278721   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0802 17:28:54.278853   13693 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0802 17:28:54.322893   13693 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0802 17:28:54.322914   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:54.342343   13693 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0802 17:28:54.342373   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
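The kapi.go waits above poll pods by label selector until they report Running. A minimal sketch of the equivalent manual checks, assuming the same addons-892214 context shown in the log:

    # same selectors that kapi.go is polling above (sketch, not from the log)
    kubectl --context addons-892214 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=registry
    kubectl --context addons-892214 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=90s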
	I0802 17:28:54.346564   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.346589   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.346962   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.346978   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	W0802 17:28:54.347062   13693 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
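The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: another writer updated the StorageClass between read and write, so the default-class patch was rejected with a stale resourceVersion. A minimal sketch of re-applying the marking by hand, assuming the local-path class named in the message:

    # sketch only: standard annotation for marking a StorageClass as the default
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'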
	I0802 17:28:54.362192   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.362212   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.362569   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.362612   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.362623   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.412106   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:54.791977   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:54.792436   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.292085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.292634   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.341285   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.050501605s)
	I0802 17:28:55.341327   13693 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.43015665s)
	I0802 17:28:55.341357   13693 api_server.go:72] duration metric: took 9.227501733s to wait for apiserver process to appear ...
	I0802 17:28:55.341365   13693 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:28:55.341367   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:55.341370   13693 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.438718764s)
	I0802 17:28:55.341386   13693 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0802 17:28:55.341387   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:55.341860   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:55.341863   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:55.341884   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:55.341892   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:55.341898   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:55.342178   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:55.342192   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:55.342209   13693 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-892214"
	I0802 17:28:55.342827   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:55.343591   13693 out.go:177] * Verifying csi-hostpath-driver addon...
	I0802 17:28:55.344972   13693 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0802 17:28:55.345683   13693 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0802 17:28:55.345893   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0802 17:28:55.345985   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0802 17:28:55.346004   13693 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0802 17:28:55.349040   13693 api_server.go:141] control plane version: v1.30.3
	I0802 17:28:55.349067   13693 api_server.go:131] duration metric: took 7.695276ms to wait for apiserver health ...
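The readiness probe above is a plain HTTPS GET against the apiserver's /healthz endpoint at the address minikube assigned to this node. A minimal sketch of running the same check from a shell, assuming the addons-892214 kubeconfig is active (certificate verification skipped for brevity):

    curl -k https://192.168.39.4:8443/healthz   # expected body: ok
    kubectl version                             # server line should report v1.30.3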
	I0802 17:28:55.349076   13693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:28:55.366818   13693 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0802 17:28:55.366842   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:55.384261   13693 system_pods.go:59] 19 kube-system pods found
	I0802 17:28:55.384290   13693 system_pods.go:61] "coredns-7db6d8ff4d-p76fq" [670e26de-e1a8-40ee-acf4-c6d4ce7b4d93] Running
	I0802 17:28:55.384294   13693 system_pods.go:61] "coredns-7db6d8ff4d-sk9vd" [f3173627-759d-4a33-bb57-808ee415d0c5] Running
	I0802 17:28:55.384301   13693 system_pods.go:61] "csi-hostpath-attacher-0" [227e1c3a-6e8d-4f98-b792-283449039f73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:28:55.384305   13693 system_pods.go:61] "csi-hostpath-resizer-0" [c6d4e68a-d483-4117-a6fd-d0a19698bb11] Pending
	I0802 17:28:55.384311   13693 system_pods.go:61] "csi-hostpathplugin-f6h9n" [07a6f05c-29ec-4f7d-a29e-9e9eae21e2b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:28:55.384315   13693 system_pods.go:61] "etcd-addons-892214" [1e9d0def-524d-43ca-b29c-2e1e66d2d47b] Running
	I0802 17:28:55.384319   13693 system_pods.go:61] "kube-apiserver-addons-892214" [731cdc04-a76d-4875-a043-754d4bfcd0f9] Running
	I0802 17:28:55.384322   13693 system_pods.go:61] "kube-controller-manager-addons-892214" [363f4529-972c-4645-b8e2-843e479d5b37] Running
	I0802 17:28:55.384327   13693 system_pods.go:61] "kube-ingress-dns-minikube" [ee00722b-6b3b-4626-b856-87ffccf9d0d2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0802 17:28:55.384332   13693 system_pods.go:61] "kube-proxy-54c9t" [cd068d1d-f377-4c1f-b13b-45c1df8b4eb2] Running
	I0802 17:28:55.384336   13693 system_pods.go:61] "kube-scheduler-addons-892214" [4f9abb24-eb93-4fe9-9de4-929eb510eed3] Running
	I0802 17:28:55.384341   13693 system_pods.go:61] "metrics-server-c59844bb4-smv7j" [8ea8885b-a830-4d58-80b8-a67cc4f26748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 17:28:55.384350   13693 system_pods.go:61] "nvidia-device-plugin-daemonset-7hdnl" [6af5e808-ef75-4f5b-8567-c08fc5f82515] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0802 17:28:55.384360   13693 system_pods.go:61] "registry-698f998955-cs8q7" [7d2c31bd-4360-46bd-82c0-b2258ba69944] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0802 17:28:55.384369   13693 system_pods.go:61] "registry-proxy-ntww4" [59de3da3-a31c-480b-8715-6dcecc3c01e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0802 17:28:55.384375   13693 system_pods.go:61] "snapshot-controller-745499f584-rzz47" [3db6259f-c6b3-4922-a452-a354b7ef788e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.384381   13693 system_pods.go:61] "snapshot-controller-745499f584-tnv6t" [394a2f4c-f536-4fb7-b476-2d8febddc5b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.384387   13693 system_pods.go:61] "storage-provisioner" [f4df5f76-bb9c-40a3-b0db-14ac7972a88f] Running
	I0802 17:28:55.384394   13693 system_pods.go:61] "tiller-deploy-6677d64bcd-t67mn" [a61d96f6-f02c-4320-a0ef-8562603e4751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0802 17:28:55.384403   13693 system_pods.go:74] duration metric: took 35.320792ms to wait for pod list to return data ...
	I0802 17:28:55.384413   13693 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:28:55.409155   13693 default_sa.go:45] found service account: "default"
	I0802 17:28:55.409179   13693 default_sa.go:55] duration metric: took 24.760351ms for default service account to be created ...
	I0802 17:28:55.409189   13693 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:28:55.425516   13693 system_pods.go:86] 19 kube-system pods found
	I0802 17:28:55.425547   13693 system_pods.go:89] "coredns-7db6d8ff4d-p76fq" [670e26de-e1a8-40ee-acf4-c6d4ce7b4d93] Running
	I0802 17:28:55.425552   13693 system_pods.go:89] "coredns-7db6d8ff4d-sk9vd" [f3173627-759d-4a33-bb57-808ee415d0c5] Running
	I0802 17:28:55.425559   13693 system_pods.go:89] "csi-hostpath-attacher-0" [227e1c3a-6e8d-4f98-b792-283449039f73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:28:55.425564   13693 system_pods.go:89] "csi-hostpath-resizer-0" [c6d4e68a-d483-4117-a6fd-d0a19698bb11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0802 17:28:55.425573   13693 system_pods.go:89] "csi-hostpathplugin-f6h9n" [07a6f05c-29ec-4f7d-a29e-9e9eae21e2b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:28:55.425578   13693 system_pods.go:89] "etcd-addons-892214" [1e9d0def-524d-43ca-b29c-2e1e66d2d47b] Running
	I0802 17:28:55.425586   13693 system_pods.go:89] "kube-apiserver-addons-892214" [731cdc04-a76d-4875-a043-754d4bfcd0f9] Running
	I0802 17:28:55.425591   13693 system_pods.go:89] "kube-controller-manager-addons-892214" [363f4529-972c-4645-b8e2-843e479d5b37] Running
	I0802 17:28:55.425596   13693 system_pods.go:89] "kube-ingress-dns-minikube" [ee00722b-6b3b-4626-b856-87ffccf9d0d2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0802 17:28:55.425600   13693 system_pods.go:89] "kube-proxy-54c9t" [cd068d1d-f377-4c1f-b13b-45c1df8b4eb2] Running
	I0802 17:28:55.425604   13693 system_pods.go:89] "kube-scheduler-addons-892214" [4f9abb24-eb93-4fe9-9de4-929eb510eed3] Running
	I0802 17:28:55.425612   13693 system_pods.go:89] "metrics-server-c59844bb4-smv7j" [8ea8885b-a830-4d58-80b8-a67cc4f26748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 17:28:55.425618   13693 system_pods.go:89] "nvidia-device-plugin-daemonset-7hdnl" [6af5e808-ef75-4f5b-8567-c08fc5f82515] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0802 17:28:55.425623   13693 system_pods.go:89] "registry-698f998955-cs8q7" [7d2c31bd-4360-46bd-82c0-b2258ba69944] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0802 17:28:55.425629   13693 system_pods.go:89] "registry-proxy-ntww4" [59de3da3-a31c-480b-8715-6dcecc3c01e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0802 17:28:55.425638   13693 system_pods.go:89] "snapshot-controller-745499f584-rzz47" [3db6259f-c6b3-4922-a452-a354b7ef788e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.425644   13693 system_pods.go:89] "snapshot-controller-745499f584-tnv6t" [394a2f4c-f536-4fb7-b476-2d8febddc5b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.425651   13693 system_pods.go:89] "storage-provisioner" [f4df5f76-bb9c-40a3-b0db-14ac7972a88f] Running
	I0802 17:28:55.425656   13693 system_pods.go:89] "tiller-deploy-6677d64bcd-t67mn" [a61d96f6-f02c-4320-a0ef-8562603e4751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0802 17:28:55.425670   13693 system_pods.go:126] duration metric: took 16.471119ms to wait for k8s-apps to be running ...
	I0802 17:28:55.425683   13693 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:28:55.425726   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:28:55.430022   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0802 17:28:55.430043   13693 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0802 17:28:55.454076   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:28:55.454096   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0802 17:28:55.543160   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:28:55.783999   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.784959   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.853093   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.217709   13693 system_svc.go:56] duration metric: took 792.015759ms WaitForService to wait for kubelet
	I0802 17:28:56.217741   13693 kubeadm.go:582] duration metric: took 10.10388483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:28:56.217766   13693 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:28:56.217874   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.805726082s)
	I0802 17:28:56.217926   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:56.217944   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:56.218167   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:56.218180   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:56.218188   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:56.218194   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:56.218434   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:56.218456   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:56.218440   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:56.220579   13693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:28:56.220598   13693 node_conditions.go:123] node cpu capacity is 2
	I0802 17:28:56.220609   13693 node_conditions.go:105] duration metric: took 2.838368ms to run NodePressure ...
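The capacity figures used for the NodePressure verification come straight from the node object's status. A minimal sketch of reading them directly, assuming the single node carries the profile name addons-892214:

    # sketch only: prints the capacity map (cpu: 2, ephemeral-storage: 17734596Ki)
    kubectl get node addons-892214 -o jsonpath='{.status.capacity}{"\n"}'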
	I0802 17:28:56.220622   13693 start.go:241] waiting for startup goroutines ...
	I0802 17:28:56.284744   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.285109   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.352062   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.861520   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.868169   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.914804   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.018522   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.475322379s)
	I0802 17:28:57.018597   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:57.018614   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:57.018884   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:57.018948   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:57.018966   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:57.018978   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:57.018990   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:57.019244   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:57.019261   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:57.019246   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:57.020765   13693 addons.go:475] Verifying addon gcp-auth=true in "addons-892214"
	I0802 17:28:57.022295   13693 out.go:177] * Verifying gcp-auth addon...
	I0802 17:28:57.024175   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0802 17:28:57.040965   13693 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0802 17:28:57.040984   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:57.284003   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.285413   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.352566   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.529875   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:57.828315   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.828473   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.855603   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.030083   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:58.283963   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.284278   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.351277   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.527323   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:58.784793   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.784928   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.851288   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.028216   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:59.284931   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.284984   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.351326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.527766   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:59.785454   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.785816   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.858184   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.028180   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:00.283845   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.284391   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.351972   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.528563   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:00.787357   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.787674   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.850721   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.027449   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:01.283605   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.283857   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.352853   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.529114   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:01.784456   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.785132   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.854608   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.029118   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:02.285036   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.285282   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.353966   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.527470   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:02.784032   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.785170   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.851508   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.027985   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:03.282779   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.283037   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.351165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.527535   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:03.783694   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.783978   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.851205   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.027973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:04.283363   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.283781   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.351973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.527799   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:04.782743   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.785082   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.851888   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.028125   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:05.283673   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.283741   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.352174   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.528410   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:05.793272   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.793290   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.851567   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.028729   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:06.283622   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.284031   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.352769   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.528437   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:06.784417   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.784704   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.851636   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.028003   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:07.283389   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.284837   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.351526   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.527764   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:07.783791   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.788226   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.851389   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.028130   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:08.283634   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.284369   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.350879   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.527425   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:08.784139   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.784429   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.851024   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.027525   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:09.284239   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.284305   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.350531   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.528886   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:09.783671   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.785079   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.851258   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.028205   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:10.285810   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.285883   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.353499   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.527336   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:10.783960   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.784914   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.851187   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.027970   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:11.284561   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.286326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.352860   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.531821   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:11.786129   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.786685   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.851210   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.028197   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:12.283903   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.284683   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.351693   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.528110   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:12.782942   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.783498   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.851220   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.027573   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:13.286152   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.287144   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.351558   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.528117   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:13.784226   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.784669   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.850695   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.028937   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:14.283758   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.284809   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.350935   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.527569   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:14.784772   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.785412   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.851815   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.028323   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:15.287890   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.288012   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.352522   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.527838   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:15.799529   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.800420   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.854119   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.028043   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:16.283744   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.283881   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.351792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.527916   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:16.783401   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.784277   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.850906   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.027291   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:17.285109   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.285351   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.351823   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.528022   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:17.784654   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.785372   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.851867   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.329103   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:18.329904   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.330465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.351059   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.528110   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:18.783693   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.783872   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.851665   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.028013   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:19.283974   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.284013   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.353821   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.530285   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:19.783724   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.784554   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.851079   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.027714   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:20.283725   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.283842   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.350755   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.529235   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:20.785042   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.785556   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.851784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.028342   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:21.282891   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.283412   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.350698   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.527507   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:21.783750   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.784591   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.851461   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.027792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:22.283684   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.284596   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.352095   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.534331   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:22.783884   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.784084   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.851481   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.027950   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:23.285051   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.285070   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.352894   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.527899   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:23.785263   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.785561   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.851175   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.029469   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:24.286740   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.286863   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.352851   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.527308   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:24.784679   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.784826   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.851521   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.029726   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:25.283525   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.284475   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.352280   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.528135   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:25.782385   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.783554   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.850944   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.027567   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:26.285373   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.287058   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.351612   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.527769   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:26.784301   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.785530   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.852865   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.028581   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:27.288328   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.289507   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.350513   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.527823   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:27.784433   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.784984   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.851575   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.027985   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:28.284865   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.284973   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.351260   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.527928   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:28.783702   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.783860   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.851424   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.027983   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:29.285017   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.285428   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.351721   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.528487   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:29.787711   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.789207   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.856303   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.027466   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:30.284990   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.285395   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.351790   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.527980   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:30.784093   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.784403   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.852085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.028253   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:31.283764   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.284863   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.351621   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.533122   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:31.785884   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.787012   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.850544   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.027802   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:32.302724   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.303120   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.352034   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.528773   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:32.784949   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.785385   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.867342   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.027973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:33.286986   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.288384   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.351447   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.528049   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:33.783800   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.783942   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.851536   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.028098   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:34.285222   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.285569   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.352266   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.528494   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:34.784581   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.784938   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.851658   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.028965   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:35.285635   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.288119   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.351496   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.529243   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:35.783734   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.783906   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.934983   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.027443   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:36.284573   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.284883   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.350940   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.527134   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:36.784614   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.784626   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.851705   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.027748   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:37.284267   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:37.285187   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:37.351442   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.527493   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:37.783982   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:37.785179   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:37.852710   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.029314   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:38.283730   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:38.283839   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.350947   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.527522   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:38.784667   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:38.785977   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.851165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.028876   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:39.283502   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:39.284535   13693 kapi.go:107] duration metric: took 45.005812272s to wait for kubernetes.io/minikube-addons=registry ...
	I0802 17:29:39.351275   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.527957   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:39.783806   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:39.850673   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.028776   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:40.283413   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.351800   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.528209   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:40.783142   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.851171   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.028688   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:41.282946   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:41.351236   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.528135   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:41.783035   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:41.851293   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.027926   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:42.283138   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.356165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.602922   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:42.784076   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.851620   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.027804   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:43.284023   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.351013   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.528042   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:43.783426   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.851629   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.028337   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:44.282652   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:44.350657   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.527961   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.056080   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.056541   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.056942   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.282862   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.351629   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.527634   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.783432   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.851272   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.028312   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:46.282702   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.350973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.527435   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:46.783183   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.852930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.028156   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:47.370988   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.373916   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.529509   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:47.783859   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.851457   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.028115   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:48.285916   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:48.351928   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.527331   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:48.783254   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:48.851244   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.028529   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:49.283835   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.353408   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.528064   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:49.782552   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.850278   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.027784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:50.283771   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.352536   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.528362   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:50.783258   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.851453   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.028387   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:51.284859   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:51.351010   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.527304   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:51.783325   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:51.851305   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.028689   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:52.283638   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:52.350745   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.527814   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.033329   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.034413   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.034463   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.283905   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.351349   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.527381   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.783475   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.851784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.028127   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:54.282613   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.350997   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.527875   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:54.785624   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.850832   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.033497   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:55.293610   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.350689   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.527938   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:55.784032   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.853182   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.027191   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:56.284221   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.353828   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.527749   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:56.782777   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.851207   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.031574   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:57.284499   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.352397   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.527872   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:57.800114   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.850852   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.028218   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:58.283011   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.351087   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.527530   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:58.783528   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.850662   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.028365   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:59.282952   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.351085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.528454   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:59.786595   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.856200   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.028300   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:00.283271   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.350923   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.528374   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:00.783005   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.851386   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.028246   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:01.283628   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.350499   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.528441   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:01.790738   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.858792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.320083   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:02.320826   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.350930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.527930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:02.784401   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.851472   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.027166   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:03.282683   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.356326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.527399   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:03.782970   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.851394   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.027732   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:04.283616   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.350400   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.528511   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:04.966333   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.968645   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.028324   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:05.284073   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.351623   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.527780   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:05.784018   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.851283   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.028209   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:06.284304   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:06.351082   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.528943   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:06.786232   13693 kapi.go:107] duration metric: took 1m12.507374779s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0802 17:30:06.853268   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.027259   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:07.350996   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.538067   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.162654   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.164327   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:08.351582   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:08.528458   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.851465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.027967   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:09.350551   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.528028   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:09.850791   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.032575   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:10.351443   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.528139   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:10.851218   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:11.027960   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:11.351306   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:11.527593   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:11.852063   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:12.028395   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:12.352640   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:12.527465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:12.853283   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:13.028142   13693 kapi.go:107] duration metric: took 1m16.003960207s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0802 17:30:13.030072   13693 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-892214 cluster.
	I0802 17:30:13.031557   13693 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0802 17:30:13.032954   13693 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0802 17:30:13.376702   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:13.850864   13693 kapi.go:107] duration metric: took 1m18.504968965s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0802 17:30:13.852881   13693 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0802 17:30:13.854184   13693 addons.go:510] duration metric: took 1m27.740310331s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0802 17:30:13.854224   13693 start.go:246] waiting for cluster config update ...
	I0802 17:30:13.854249   13693 start.go:255] writing updated cluster config ...
	I0802 17:30:13.854517   13693 ssh_runner.go:195] Run: rm -f paused
	I0802 17:30:13.902530   13693 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 17:30:13.904477   13693 out.go:177] * Done! kubectl is now configured to use "addons-892214" cluster and "default" namespace by default
	
	
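Note on the gcp-auth output above: the addon's own messages describe its two knobs, the `gcp-auth-skip-secret` label key for opting a pod out of credential mounting, and rerunning `addons enable` with `--refresh` to mount credentials into pods that already exist. A minimal sketch of the corresponding commands follows; the pod manifest path and the "true" label value are illustrative, only the label key and the --refresh flag come from the log lines above.

  # Opt a pod out of credential mounting by putting the label in its spec
  # before creation, e.g. metadata.labels: { gcp-auth-skip-secret: "true" }
  # (illustrative manifest path):
  kubectl --context addons-892214 apply -f pod-with-skip-label.yaml

  # Re-mount GCP credentials into pods that existed before the addon was enabled:
  minikube addons enable gcp-auth --refresh
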
	==> CRI-O <==
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.922829994Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620013922798257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65b69721-a4ae-465f-8310-947705c461d6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.925536105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1be52da-47b9-4eb8-b16c-48f5297cdeae name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.925641422Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1be52da-47b9-4eb8-b16c-48f5297cdeae name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.925977146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1be52da-47b9-4eb8-b16c-48f5297cdeae name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.963509167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0a0413d-f626-4e84-994c-54c1b404de04 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.963749832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0a0413d-f626-4e84-994c-54c1b404de04 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.965504621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81b9a6c4-7622-4448-892b-759522029abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.966959025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620013966933365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81b9a6c4-7622-4448-892b-759522029abb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.967387685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dca71248-2c19-47ff-acc4-fca6c4dae097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.967442581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dca71248-2c19-47ff-acc4-fca6c4dae097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:33 addons-892214 crio[687]: time="2024-08-02 17:33:33.967730647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dca71248-2c19-47ff-acc4-fca6c4dae097 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.002830402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35fe20c6-79aa-4428-803a-e7a57ce2294d name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.002920187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35fe20c6-79aa-4428-803a-e7a57ce2294d name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.003934319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f26c0b52-a4e2-4f60-99ea-47f69dd005d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.005161124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620014005134055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f26c0b52-a4e2-4f60-99ea-47f69dd005d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.005679569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59731b84-25ca-4154-b034-c07c84e20c97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.005752848Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59731b84-25ca-4154-b034-c07c84e20c97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.006180763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59731b84-25ca-4154-b034-c07c84e20c97 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.038053534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=346f26a7-71a6-4815-beb9-c23fb617c65b name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.038133271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=346f26a7-71a6-4815-beb9-c23fb617c65b name=/runtime.v1.RuntimeService/Version
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.039273025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aca974c-5b6a-497d-8040-639313dbde9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.044038119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620014044004488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aca974c-5b6a-497d-8040-639313dbde9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.044802560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60f87309-06ce-4702-8658-30dab8a06a4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.044873368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60f87309-06ce-4702-8658-30dab8a06a4c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:33:34 addons-892214 crio[687]: time="2024-08-02 17:33:34.045157552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60f87309-06ce-4702-8658-30dab8a06a4c name=/runtime.v1.RuntimeService/ListContainers
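	The repeated Version, ImageFsInfo and ListContainers requests traced above are routine CRI polling of the cri-o socket rather than test activity. As a rough sketch only (not part of the test run; it assumes the addons-892214 guest is still running and that crictl is available inside the minikube VM), the same three RPCs can be replayed by hand against the socket path shown in the node's cri-socket annotation below:

	minikube ssh -p addons-892214                                               # open a shell inside the node VM
	# then, from inside the VM, issue the equivalent crictl calls:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # RuntimeService/ListContainers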
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e10b59d4eecb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   7 seconds ago       Running             hello-world-app           0                   a2836adc9b342       hello-world-app-6778b5fc9f-m5mgj
	37a277059d0f1       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         2 minutes ago       Running             nginx                     0                   9754415aeff5d       nginx
	b6adf8237f42c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   50ae3db6a3229       busybox
	ebc277ab8447c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        3 minutes ago       Running             local-path-provisioner    0                   edb501abf8809       local-path-provisioner-8d985888d-4ghlh
	ab25730f9ac6e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   4 minutes ago       Running             metrics-server            0                   5a6106fe3cb99       metrics-server-c59844bb4-smv7j
	01ce9fb7b9bc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        4 minutes ago       Running             storage-provisioner       0                   9ab9eff506f73       storage-provisioner
	4dd173b9de652       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        4 minutes ago       Running             coredns                   0                   5863a810d3cae       coredns-7db6d8ff4d-sk9vd
	d54f0d9c8ff29       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        4 minutes ago       Running             kube-proxy                0                   6c53e375d81c8       kube-proxy-54c9t
	ad1f1d7d140c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        5 minutes ago       Running             etcd                      0                   2c551c1f8ac45       etcd-addons-892214
	68b777f9568ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        5 minutes ago       Running             kube-controller-manager   0                   da6052a1b07a5       kube-controller-manager-addons-892214
	ce9b12187ebce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        5 minutes ago       Running             kube-scheduler            0                   a5207cbdabdb7       kube-scheduler-addons-892214
	607e5b4ce630c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        5 minutes ago       Running             kube-apiserver            0                   40c82970196e3       kube-apiserver-addons-892214
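	The table above is the runtime's node-local view of the workloads. To cross-check it against the API server's view while triaging (illustrative only; assumes the cluster from this run is still reachable under the addons-892214 kubectl context), one could run:

	kubectl --context addons-892214 get pods -A -o wide
	kubectl --context addons-892214 describe pod -n default hello-world-app-6778b5fc9f-m5mgj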
	
	
	==> coredns [4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3] <==
	[INFO] 10.244.0.7:57689 - 63838 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000590055s
	[INFO] 10.244.0.7:46225 - 55472 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000168109s
	[INFO] 10.244.0.7:46225 - 14770 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259057s
	[INFO] 10.244.0.7:38115 - 17924 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058774s
	[INFO] 10.244.0.7:38115 - 49210 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103759s
	[INFO] 10.244.0.7:44593 - 4935 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087892s
	[INFO] 10.244.0.7:44593 - 15937 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116761s
	[INFO] 10.244.0.7:51451 - 15065 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000116559s
	[INFO] 10.244.0.7:51451 - 3525 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00020012s
	[INFO] 10.244.0.7:51704 - 29619 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114502s
	[INFO] 10.244.0.7:51704 - 4017 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000205747s
	[INFO] 10.244.0.7:56800 - 26563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090366s
	[INFO] 10.244.0.7:56800 - 17613 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044849s
	[INFO] 10.244.0.7:57131 - 63757 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053783s
	[INFO] 10.244.0.7:57131 - 39947 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097075s
	[INFO] 10.244.0.22:36936 - 63718 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00050466s
	[INFO] 10.244.0.22:43964 - 21398 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152662s
	[INFO] 10.244.0.22:39439 - 29347 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013128s
	[INFO] 10.244.0.22:40314 - 41123 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101775s
	[INFO] 10.244.0.22:45232 - 37518 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092809s
	[INFO] 10.244.0.22:57465 - 56830 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091685s
	[INFO] 10.244.0.22:43500 - 39329 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000364671s
	[INFO] 10.244.0.22:38717 - 14406 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000755682s
	[INFO] 10.244.0.26:46232 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000457151s
	[INFO] 10.244.0.26:50015 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104755s
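	The NXDOMAIN entries above are expected: the pod resolver (ndots:5) appends each cluster search domain before the bare service name finally resolves with NOERROR. As a sketch only, a throwaway pod can confirm the same lookup path (busybox:1.28 is just a convenient image that bundles nslookup; it is not used by the test itself):

	kubectl --context addons-892214 run dns-check --image=busybox:1.28 --rm -it --restart=Never -- nslookup registry.kube-system.svc.cluster.local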
	
	
	==> describe nodes <==
	Name:               addons-892214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-892214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=addons-892214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_28_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-892214
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:28:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-892214
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:33:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:32:05 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:32:05 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:32:05 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:32:05 +0000   Fri, 02 Aug 2024 17:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    addons-892214
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 64bd5dd688f344e499a7dc3b368671c8
	  System UUID:                64bd5dd6-88f3-44e4-99a7-dc3b368671c8
	  Boot ID:                    88934d9c-d3a5-495c-b37a-7f71b825103a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-world-app-6778b5fc9f-m5mgj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-7db6d8ff4d-sk9vd                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m48s
	  kube-system                 etcd-addons-892214                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m3s
	  kube-system                 kube-apiserver-addons-892214              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-892214     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-54c9t                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-addons-892214              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-c59844bb4-smv7j            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m43s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  local-path-storage          local-path-provisioner-8d985888d-4ghlh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m46s  kube-proxy       
	  Normal  Starting                 5m3s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m3s   kubelet          Node addons-892214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s   kubelet          Node addons-892214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s   kubelet          Node addons-892214 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m2s   kubelet          Node addons-892214 status is now: NodeReady
	  Normal  RegisteredNode           4m49s  node-controller  Node addons-892214 event: Registered Node addons-892214 in Controller
	
	
	==> dmesg <==
	[  +5.077315] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.007679] kauditd_printk_skb: 132 callbacks suppressed
	[Aug 2 17:29] kauditd_printk_skb: 66 callbacks suppressed
	[ +24.299050] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.184468] kauditd_printk_skb: 32 callbacks suppressed
	[ +19.423020] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.040766] kauditd_printk_skb: 45 callbacks suppressed
	[Aug 2 17:30] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.256749] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.579427] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.204111] kauditd_printk_skb: 48 callbacks suppressed
	[ +24.964049] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.456287] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.927033] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.645032] kauditd_printk_skb: 43 callbacks suppressed
	[Aug 2 17:31] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.836290] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.034263] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.707953] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.818538] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.331447] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.430817] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.658139] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 2 17:33] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.268032] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9] <==
	{"level":"info","ts":"2024-08-02T17:30:08.142139Z","caller":"traceutil/trace.go:171","msg":"trace[1369620422] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"451.900659ms","start":"2024-08-02T17:30:07.690224Z","end":"2024-08-02T17:30:08.142125Z","steps":["trace[1369620422] 'process raft request'  (duration: 451.797358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.142293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.690205Z","time spent":"452.029573ms","remote":"127.0.0.1:38152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1111 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-08-02T17:30:08.142681Z","caller":"traceutil/trace.go:171","msg":"trace[35641469] linearizableReadLoop","detail":"{readStateIndex:1166; appliedIndex:1166; }","duration":"311.299357ms","start":"2024-08-02T17:30:07.831372Z","end":"2024-08-02T17:30:08.142672Z","steps":["trace[35641469] 'read index received'  (duration: 311.296218ms)","trace[35641469] 'applied index is now lower than readState.Index'  (duration: 2.487µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:30:08.142755Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.374229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T17:30:08.142829Z","caller":"traceutil/trace.go:171","msg":"trace[398693944] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1131; }","duration":"311.474427ms","start":"2024-08-02T17:30:07.831349Z","end":"2024-08-02T17:30:08.142823Z","steps":["trace[398693944] 'agreement among raft nodes before linearized reading'  (duration: 311.379284ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.142852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.831329Z","time spent":"311.517092ms","remote":"127.0.0.1:38152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":27,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"warn","ts":"2024-08-02T17:30:08.144809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.954433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85513"}
	{"level":"info","ts":"2024-08-02T17:30:08.145141Z","caller":"traceutil/trace.go:171","msg":"trace[141671573] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1132; }","duration":"309.308888ms","start":"2024-08-02T17:30:07.83582Z","end":"2024-08-02T17:30:08.145129Z","steps":["trace[141671573] 'agreement among raft nodes before linearized reading'  (duration: 308.830642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.835807Z","time spent":"309.456452ms","remote":"127.0.0.1:48198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85535,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-08-02T17:30:08.14548Z","caller":"traceutil/trace.go:171","msg":"trace[1912328206] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"276.675583ms","start":"2024-08-02T17:30:07.868793Z","end":"2024-08-02T17:30:08.145468Z","steps":["trace[1912328206] 'process raft request'  (duration: 275.705732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.183993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11441"}
	{"level":"info","ts":"2024-08-02T17:30:08.146126Z","caller":"traceutil/trace.go:171","msg":"trace[1759529977] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1132; }","duration":"132.571052ms","start":"2024-08-02T17:30:08.013548Z","end":"2024-08-02T17:30:08.146119Z","steps":["trace[1759529977] 'agreement among raft nodes before linearized reading'  (duration: 132.095628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.5029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T17:30:08.146434Z","caller":"traceutil/trace.go:171","msg":"trace[2094260723] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1132; }","duration":"172.163278ms","start":"2024-08-02T17:30:07.974255Z","end":"2024-08-02T17:30:08.146419Z","steps":["trace[2094260723] 'agreement among raft nodes before linearized reading'  (duration: 171.506596ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:13.359928Z","caller":"traceutil/trace.go:171","msg":"trace[635625109] linearizableReadLoop","detail":"{readStateIndex:1196; appliedIndex:1195; }","duration":"128.815872ms","start":"2024-08-02T17:30:13.2311Z","end":"2024-08-02T17:30:13.359915Z","steps":["trace[635625109] 'read index received'  (duration: 128.692264ms)","trace[635625109] 'applied index is now lower than readState.Index'  (duration: 123.229µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T17:30:13.360202Z","caller":"traceutil/trace.go:171","msg":"trace[1245398561] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"205.519885ms","start":"2024-08-02T17:30:13.154669Z","end":"2024-08-02T17:30:13.360189Z","steps":["trace[1245398561] 'process raft request'  (duration: 205.161227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:13.360376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.260287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T17:30:13.3604Z","caller":"traceutil/trace.go:171","msg":"trace[1348319456] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1160; }","duration":"129.318179ms","start":"2024-08-02T17:30:13.231076Z","end":"2024-08-02T17:30:13.360394Z","steps":["trace[1348319456] 'agreement among raft nodes before linearized reading'  (duration: 129.253109ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:43.523122Z","caller":"traceutil/trace.go:171","msg":"trace[332859129] transaction","detail":"{read_only:false; response_revision:1315; number_of_response:1; }","duration":"100.725812ms","start":"2024-08-02T17:30:43.422363Z","end":"2024-08-02T17:30:43.523089Z","steps":["trace[332859129] 'process raft request'  (duration: 100.639758ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:31:10.949934Z","caller":"traceutil/trace.go:171","msg":"trace[1870216115] linearizableReadLoop","detail":"{readStateIndex:1567; appliedIndex:1566; }","duration":"331.058036ms","start":"2024-08-02T17:31:10.618849Z","end":"2024-08-02T17:31:10.949907Z","steps":["trace[1870216115] 'read index received'  (duration: 330.896049ms)","trace[1870216115] 'applied index is now lower than readState.Index'  (duration: 161.24µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:31:10.950136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.239487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11390"}
	{"level":"info","ts":"2024-08-02T17:31:10.950172Z","caller":"traceutil/trace.go:171","msg":"trace[1742876273] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1511; }","duration":"331.339504ms","start":"2024-08-02T17:31:10.618824Z","end":"2024-08-02T17:31:10.950164Z","steps":["trace[1742876273] 'agreement among raft nodes before linearized reading'  (duration: 331.175428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:31:10.950198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:31:10.618812Z","time spent":"331.376995ms","remote":"127.0.0.1:48198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11412,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-02T17:31:10.9503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:31:10.541343Z","time spent":"408.9525ms","remote":"127.0.0.1:48032","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-08-02T17:32:24.365777Z","caller":"traceutil/trace.go:171","msg":"trace[2131580234] transaction","detail":"{read_only:false; response_revision:1902; number_of_response:1; }","duration":"207.028606ms","start":"2024-08-02T17:32:24.158704Z","end":"2024-08-02T17:32:24.365732Z","steps":["trace[2131580234] 'process raft request'  (duration: 206.902437ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:33:34 up 5 min,  0 users,  load average: 0.93, 0.91, 0.47
	Linux addons-892214 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca] <==
	E0802 17:30:42.003611       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 17:30:42.021552       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0802 17:31:01.153890       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0802 17:31:01.322963       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.33.94"}
	I0802 17:31:02.954552       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0802 17:31:03.988682       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0802 17:31:16.881794       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.4:8443->10.244.0.30:38758: read: connection reset by peer
	I0802 17:31:19.696275       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0802 17:31:43.152522       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.227.161"}
	I0802 17:31:53.012768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.012854       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.042200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.042261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.061845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.062061       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.065365       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.065497       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.097062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.097111       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0802 17:31:54.062738       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0802 17:31:54.098130       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0802 17:31:54.109028       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0802 17:33:24.250032       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.126.37"}
	E0802 17:33:26.208495       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd] <==
	E0802 17:32:20.385380       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:24.396999       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:24.397119       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:29.183335       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:29.183450       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:32.513702       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:32.513808       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:32:50.898998       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:32:50.899190       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:05.857711       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:05.857793       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:10.535260       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:10.535404       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:33:18.501665       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:18.501765       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0802 17:33:24.074365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="52.501954ms"
	I0802 17:33:24.108169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="29.011721ms"
	I0802 17:33:24.108329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="52.659µs"
	I0802 17:33:26.128483       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0802 17:33:26.135013       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0802 17:33:26.136896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="3.322µs"
	I0802 17:33:27.553040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.807525ms"
	I0802 17:33:27.553487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="68.156µs"
	W0802 17:33:33.152206       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:33:33.152311       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c] <==
	I0802 17:28:47.963981       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:28:47.987118       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	I0802 17:28:48.064742       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:28:48.064800       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:28:48.064820       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:28:48.068312       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:28:48.068491       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:28:48.068502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:28:48.069833       1 config.go:192] "Starting service config controller"
	I0802 17:28:48.069845       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:28:48.069868       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:28:48.069871       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:28:48.070325       1 config.go:319] "Starting node config controller"
	I0802 17:28:48.070331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:28:48.170811       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:28:48.170849       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:28:48.170868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6] <==
	W0802 17:28:29.014844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:28:29.014883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:28:29.014942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 17:28:29.014967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 17:28:29.015014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.015036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:29.015167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 17:28:29.015211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 17:28:29.015506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.016663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:29.016936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 17:28:29.017781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 17:28:29.830785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 17:28:29.830898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 17:28:29.832734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 17:28:29.832795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 17:28:29.912526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:28:29.912687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:28:29.932556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.932636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:30.094375       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 17:28:30.094506       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:28:30.194685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:28:30.194748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0802 17:28:32.702856       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 17:33:25 addons-892214 kubelet[1287]: I0802 17:33:25.509971    1287 scope.go:117] "RemoveContainer" containerID="358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4"
	Aug 02 17:33:25 addons-892214 kubelet[1287]: I0802 17:33:25.539459    1287 scope.go:117] "RemoveContainer" containerID="358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4"
	Aug 02 17:33:25 addons-892214 kubelet[1287]: E0802 17:33:25.540426    1287 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4\": container with ID starting with 358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4 not found: ID does not exist" containerID="358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4"
	Aug 02 17:33:25 addons-892214 kubelet[1287]: I0802 17:33:25.540474    1287 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4"} err="failed to get container status \"358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4\": rpc error: code = NotFound desc = could not find container \"358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4\": container with ID starting with 358c4f8d7e88dfefbf30df9f7a5b5ddaec14ac385b8809eaf4c58bac61874cc4 not found: ID does not exist"
	Aug 02 17:33:25 addons-892214 kubelet[1287]: I0802 17:33:25.620019    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee00722b-6b3b-4626-b856-87ffccf9d0d2" path="/var/lib/kubelet/pods/ee00722b-6b3b-4626-b856-87ffccf9d0d2/volumes"
	Aug 02 17:33:27 addons-892214 kubelet[1287]: I0802 17:33:27.619332    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e320e4-634c-4b3c-8f24-7fa79393d99e" path="/var/lib/kubelet/pods/77e320e4-634c-4b3c-8f24-7fa79393d99e/volumes"
	Aug 02 17:33:27 addons-892214 kubelet[1287]: I0802 17:33:27.619828    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa1888d7-e51f-4ddf-bc96-f321f29790bd" path="/var/lib/kubelet/pods/aa1888d7-e51f-4ddf-bc96-f321f29790bd/volumes"
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.415008    1287 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d912acd-9496-437a-8eb6-7385332857af-webhook-cert\") pod \"2d912acd-9496-437a-8eb6-7385332857af\" (UID: \"2d912acd-9496-437a-8eb6-7385332857af\") "
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.415065    1287 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt24m\" (UniqueName: \"kubernetes.io/projected/2d912acd-9496-437a-8eb6-7385332857af-kube-api-access-kt24m\") pod \"2d912acd-9496-437a-8eb6-7385332857af\" (UID: \"2d912acd-9496-437a-8eb6-7385332857af\") "
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.417105    1287 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d912acd-9496-437a-8eb6-7385332857af-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2d912acd-9496-437a-8eb6-7385332857af" (UID: "2d912acd-9496-437a-8eb6-7385332857af"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.417660    1287 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d912acd-9496-437a-8eb6-7385332857af-kube-api-access-kt24m" (OuterVolumeSpecName: "kube-api-access-kt24m") pod "2d912acd-9496-437a-8eb6-7385332857af" (UID: "2d912acd-9496-437a-8eb6-7385332857af"). InnerVolumeSpecName "kube-api-access-kt24m". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.515305    1287 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2d912acd-9496-437a-8eb6-7385332857af-webhook-cert\") on node \"addons-892214\" DevicePath \"\""
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.515373    1287 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kt24m\" (UniqueName: \"kubernetes.io/projected/2d912acd-9496-437a-8eb6-7385332857af-kube-api-access-kt24m\") on node \"addons-892214\" DevicePath \"\""
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.539949    1287 scope.go:117] "RemoveContainer" containerID="a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010"
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.558528    1287 scope.go:117] "RemoveContainer" containerID="a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010"
	Aug 02 17:33:29 addons-892214 kubelet[1287]: E0802 17:33:29.559317    1287 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010\": container with ID starting with a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010 not found: ID does not exist" containerID="a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010"
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.559358    1287 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010"} err="failed to get container status \"a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010\": rpc error: code = NotFound desc = could not find container \"a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010\": container with ID starting with a7fce4aadca149b8d5a99338c1460bbcfd63abe015cc41a529c3f8fc73f22010 not found: ID does not exist"
	Aug 02 17:33:29 addons-892214 kubelet[1287]: I0802 17:33:29.620070    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d912acd-9496-437a-8eb6-7385332857af" path="/var/lib/kubelet/pods/2d912acd-9496-437a-8eb6-7385332857af/volumes"
	Aug 02 17:33:31 addons-892214 kubelet[1287]: E0802 17:33:31.639294    1287 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:33:31 addons-892214 kubelet[1287]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:33:31 addons-892214 kubelet[1287]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:33:31 addons-892214 kubelet[1287]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:33:31 addons-892214 kubelet[1287]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:33:32 addons-892214 kubelet[1287]: I0802 17:33:32.146436    1287 scope.go:117] "RemoveContainer" containerID="dfb9a0173c90c8011d7113af56a38827c175cf0732ee7f3da88139b87044544d"
	Aug 02 17:33:32 addons-892214 kubelet[1287]: I0802 17:33:32.169036    1287 scope.go:117] "RemoveContainer" containerID="d21aa0be6fd5feacd6507d5eca90f471380137edab2d6ea54442791f2f79533e"
	
	
	==> storage-provisioner [01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649] <==
	I0802 17:28:53.347445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 17:28:53.545122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 17:28:53.545191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 17:28:53.694248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 17:28:53.694757       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"125d3a2d-c910-47bf-b476-a112f54d5bfb", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc became leader
	I0802 17:28:53.694949       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc!
	I0802 17:28:53.796547       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-892214 -n addons-892214
helpers_test.go:261: (dbg) Run:  kubectl --context addons-892214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.20s)
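The two post-mortem commands logged at helpers_test.go:254 and helpers_test.go:261 above can be reproduced outside the harness. The Go sketch below is illustrative only, not the harness's own helper code; the binary path, profile name, and kubectl arguments are copied verbatim from the log, and the run wrapper is a hypothetical helper.

	// Sketch of the post-mortem collection step: query the API-server status
	// via the minikube binary, then list any pods that are not Running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command, echoes it, and prints its combined output.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}

	func main() {
		run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
			"-p", "addons-892214", "-n", "addons-892214")
		run("kubectl", "--context", "addons-892214", "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running")
	}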

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (366.4s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.586275ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-smv7j" [8ea8885b-a830-4d58-80b8-a67cc4f26748] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004000978s
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (66.984308ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m4.404626316s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (63.904908ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m8.410032439s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (66.20441ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m13.180091164s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (64.616394ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m21.106817719s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (68.961988ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m30.694457406s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (65.901945ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 2m46.614943481s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (66.142445ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 3m9.99701519s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (60.543276ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 3m37.239460754s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (68.482842ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 4m28.620575918s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (63.064071ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 5m25.185318839s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (66.952704ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 6m35.926226191s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-892214 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-892214 top pods -n kube-system: exit status 1 (63.229103ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-sk9vd, age: 8m2.244691745s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
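The repeated non-zero exits above follow a retry-until-deadline pattern: "kubectl top pods -n kube-system" keeps failing with "Metrics not available" until metrics-server has completed its first scrape, and the gap between attempts grows each round. The Go sketch below is illustrative only, not the actual addons_test.go retry helper; the context name comes from the log, while topPodsEventually and the backoff constants are hypothetical.

	// Sketch of the retry loop: re-run "kubectl top pods" with a growing
	// delay until it succeeds or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func topPodsEventually(kubeContext string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 5 * time.Second
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"top", "pods", "-n", "kube-system").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return nil
			}
			// "Metrics not available" is expected until metrics-server has
			// scraped the node at least once; wait and retry with a longer delay.
			fmt.Printf("top pods failed (%v), retrying in %s\n", err, delay)
			time.Sleep(delay)
			delay += delay / 2
		}
		return fmt.Errorf("metrics never became available within %s", timeout)
	}

	func main() {
		if err := topPodsEventually("addons-892214", 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}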
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-892214 -n addons-892214
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 logs -n 25: (1.174250956s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-399295                                                                     | download-only-399295 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-711292 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | binary-mirror-711292                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42613                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-711292                                                                     | binary-mirror-711292 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| addons  | enable dashboard -p                                                                         | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-892214 --wait=true                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-892214 ssh cat                                                                       | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | /opt/local-path-provisioner/pvc-a1b79ae1-93e6-47b1-8e06-9a59fcccfc8d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:30 UTC | 02 Aug 24 17:30 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-892214 ip                                                                            | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-892214 ssh curl -s                                                                   | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | -p addons-892214                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | addons-892214                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | -p addons-892214                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:31 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:31 UTC | 02 Aug 24 17:32 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-892214 ip                                                                            | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-892214 addons disable                                                                | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:33 UTC | 02 Aug 24 17:33 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-892214 addons                                                                        | addons-892214        | jenkins | v1.33.1 | 02 Aug 24 17:36 UTC | 02 Aug 24 17:36 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
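
	For reference, the multi-row "start" entry above is a single CLI invocation; reconstructed from the flags listed in the table (profile name and flags exactly as shown, in the order shown), it amounts to roughly:

	  minikube start -p addons-892214 --wait=true --memory=4000 --alsologtostderr \
	    --addons=registry --addons=metrics-server --addons=volumesnapshots \
	    --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	    --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	    --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	    --driver=kvm2 --container-runtime=crio \
	    --addons=ingress --addons=ingress-dns --addons=helm-tiller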
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:27:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:27:50.976202   13693 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:27:50.976308   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:50.976316   13693 out.go:304] Setting ErrFile to fd 2...
	I0802 17:27:50.976321   13693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:50.976506   13693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:27:50.977099   13693 out.go:298] Setting JSON to false
	I0802 17:27:50.977860   13693 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":615,"bootTime":1722619056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:27:50.977913   13693 start.go:139] virtualization: kvm guest
	I0802 17:27:50.979963   13693 out.go:177] * [addons-892214] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:27:50.981185   13693 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:27:50.981207   13693 notify.go:220] Checking for updates...
	I0802 17:27:50.983312   13693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:27:50.984457   13693 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:27:50.985530   13693 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:50.986544   13693 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:27:50.987742   13693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:27:50.989084   13693 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:27:51.020982   13693 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 17:27:51.022136   13693 start.go:297] selected driver: kvm2
	I0802 17:27:51.022149   13693 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:27:51.022160   13693 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:27:51.022853   13693 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:51.022943   13693 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:27:51.037250   13693 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:27:51.037316   13693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:27:51.037623   13693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:27:51.037661   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:27:51.037672   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:27:51.037681   13693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:27:51.037757   13693 start.go:340] cluster config:
	{Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:27:51.037880   13693 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:51.040461   13693 out.go:177] * Starting "addons-892214" primary control-plane node in "addons-892214" cluster
	I0802 17:27:51.041495   13693 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:27:51.041523   13693 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:27:51.041532   13693 cache.go:56] Caching tarball of preloaded images
	I0802 17:27:51.041603   13693 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:27:51.041616   13693 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:27:51.041903   13693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json ...
	I0802 17:27:51.041924   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json: {Name:mkec90184a2a49bfc6d18b2bafcf782d87496a0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:51.042063   13693 start.go:360] acquireMachinesLock for addons-892214: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:27:51.042113   13693 start.go:364] duration metric: took 34.882µs to acquireMachinesLock for "addons-892214"
	I0802 17:27:51.042131   13693 start.go:93] Provisioning new machine with config: &{Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:27:51.042183   13693 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 17:27:51.043720   13693 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0802 17:27:51.043838   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:27:51.043877   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:27:51.057654   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I0802 17:27:51.058071   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:27:51.058588   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:27:51.058612   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:27:51.058933   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:27:51.059121   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:27:51.059264   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:27:51.059410   13693 start.go:159] libmachine.API.Create for "addons-892214" (driver="kvm2")
	I0802 17:27:51.059441   13693 client.go:168] LocalClient.Create starting
	I0802 17:27:51.059487   13693 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:27:51.210796   13693 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:27:51.290041   13693 main.go:141] libmachine: Running pre-create checks...
	I0802 17:27:51.290064   13693 main.go:141] libmachine: (addons-892214) Calling .PreCreateCheck
	I0802 17:27:51.290548   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:27:51.290977   13693 main.go:141] libmachine: Creating machine...
	I0802 17:27:51.290991   13693 main.go:141] libmachine: (addons-892214) Calling .Create
	I0802 17:27:51.291142   13693 main.go:141] libmachine: (addons-892214) Creating KVM machine...
	I0802 17:27:51.292350   13693 main.go:141] libmachine: (addons-892214) DBG | found existing default KVM network
	I0802 17:27:51.293058   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.292936   13717 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0802 17:27:51.293084   13693 main.go:141] libmachine: (addons-892214) DBG | created network xml: 
	I0802 17:27:51.293097   13693 main.go:141] libmachine: (addons-892214) DBG | <network>
	I0802 17:27:51.293105   13693 main.go:141] libmachine: (addons-892214) DBG |   <name>mk-addons-892214</name>
	I0802 17:27:51.293110   13693 main.go:141] libmachine: (addons-892214) DBG |   <dns enable='no'/>
	I0802 17:27:51.293118   13693 main.go:141] libmachine: (addons-892214) DBG |   
	I0802 17:27:51.293124   13693 main.go:141] libmachine: (addons-892214) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 17:27:51.293130   13693 main.go:141] libmachine: (addons-892214) DBG |     <dhcp>
	I0802 17:27:51.293136   13693 main.go:141] libmachine: (addons-892214) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 17:27:51.293143   13693 main.go:141] libmachine: (addons-892214) DBG |     </dhcp>
	I0802 17:27:51.293147   13693 main.go:141] libmachine: (addons-892214) DBG |   </ip>
	I0802 17:27:51.293152   13693 main.go:141] libmachine: (addons-892214) DBG |   
	I0802 17:27:51.293156   13693 main.go:141] libmachine: (addons-892214) DBG | </network>
	I0802 17:27:51.293162   13693 main.go:141] libmachine: (addons-892214) DBG | 
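
	The network XML dumped above is what gets handed to libvirt for the private cluster network. Once it exists, it can be inspected from the host with standard virsh subcommands (network name taken from the log; output varies by host), for example:

	  virsh net-list --all                     # mk-addons-892214 should be listed as active
	  virsh net-dumpxml mk-addons-892214       # prints the XML shown above, as stored by libvirt
	  virsh net-dhcp-leases mk-addons-892214   # DHCP leases handed out on 192.168.39.0/24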
	I0802 17:27:51.298606   13693 main.go:141] libmachine: (addons-892214) DBG | trying to create private KVM network mk-addons-892214 192.168.39.0/24...
	I0802 17:27:51.359557   13693 main.go:141] libmachine: (addons-892214) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 ...
	I0802 17:27:51.359591   13693 main.go:141] libmachine: (addons-892214) DBG | private KVM network mk-addons-892214 192.168.39.0/24 created
	I0802 17:27:51.359611   13693 main.go:141] libmachine: (addons-892214) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:27:51.359633   13693 main.go:141] libmachine: (addons-892214) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:27:51.359666   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.359418   13717 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:51.622976   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.622868   13717 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa...
	I0802 17:27:51.675282   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.675158   13717 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/addons-892214.rawdisk...
	I0802 17:27:51.675303   13693 main.go:141] libmachine: (addons-892214) DBG | Writing magic tar header
	I0802 17:27:51.675313   13693 main.go:141] libmachine: (addons-892214) DBG | Writing SSH key tar header
	I0802 17:27:51.675320   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:51.675287   13717 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 ...
	I0802 17:27:51.675396   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214
	I0802 17:27:51.675419   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214 (perms=drwx------)
	I0802 17:27:51.675430   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:27:51.675441   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:27:51.675459   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:27:51.675475   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:27:51.675490   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:27:51.675513   13693 main.go:141] libmachine: (addons-892214) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:27:51.675531   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:51.675542   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:27:51.675549   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:27:51.675556   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:27:51.675563   13693 main.go:141] libmachine: (addons-892214) DBG | Checking permissions on dir: /home
	I0802 17:27:51.675569   13693 main.go:141] libmachine: (addons-892214) DBG | Skipping /home - not owner
	I0802 17:27:51.675610   13693 main.go:141] libmachine: (addons-892214) Creating domain...
	I0802 17:27:51.676444   13693 main.go:141] libmachine: (addons-892214) define libvirt domain using xml: 
	I0802 17:27:51.676468   13693 main.go:141] libmachine: (addons-892214) <domain type='kvm'>
	I0802 17:27:51.676488   13693 main.go:141] libmachine: (addons-892214)   <name>addons-892214</name>
	I0802 17:27:51.676504   13693 main.go:141] libmachine: (addons-892214)   <memory unit='MiB'>4000</memory>
	I0802 17:27:51.676517   13693 main.go:141] libmachine: (addons-892214)   <vcpu>2</vcpu>
	I0802 17:27:51.676524   13693 main.go:141] libmachine: (addons-892214)   <features>
	I0802 17:27:51.676532   13693 main.go:141] libmachine: (addons-892214)     <acpi/>
	I0802 17:27:51.676538   13693 main.go:141] libmachine: (addons-892214)     <apic/>
	I0802 17:27:51.676543   13693 main.go:141] libmachine: (addons-892214)     <pae/>
	I0802 17:27:51.676550   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676555   13693 main.go:141] libmachine: (addons-892214)   </features>
	I0802 17:27:51.676565   13693 main.go:141] libmachine: (addons-892214)   <cpu mode='host-passthrough'>
	I0802 17:27:51.676576   13693 main.go:141] libmachine: (addons-892214)   
	I0802 17:27:51.676590   13693 main.go:141] libmachine: (addons-892214)   </cpu>
	I0802 17:27:51.676609   13693 main.go:141] libmachine: (addons-892214)   <os>
	I0802 17:27:51.676619   13693 main.go:141] libmachine: (addons-892214)     <type>hvm</type>
	I0802 17:27:51.676627   13693 main.go:141] libmachine: (addons-892214)     <boot dev='cdrom'/>
	I0802 17:27:51.676632   13693 main.go:141] libmachine: (addons-892214)     <boot dev='hd'/>
	I0802 17:27:51.676640   13693 main.go:141] libmachine: (addons-892214)     <bootmenu enable='no'/>
	I0802 17:27:51.676646   13693 main.go:141] libmachine: (addons-892214)   </os>
	I0802 17:27:51.676658   13693 main.go:141] libmachine: (addons-892214)   <devices>
	I0802 17:27:51.676672   13693 main.go:141] libmachine: (addons-892214)     <disk type='file' device='cdrom'>
	I0802 17:27:51.676694   13693 main.go:141] libmachine: (addons-892214)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/boot2docker.iso'/>
	I0802 17:27:51.676705   13693 main.go:141] libmachine: (addons-892214)       <target dev='hdc' bus='scsi'/>
	I0802 17:27:51.676716   13693 main.go:141] libmachine: (addons-892214)       <readonly/>
	I0802 17:27:51.676724   13693 main.go:141] libmachine: (addons-892214)     </disk>
	I0802 17:27:51.676733   13693 main.go:141] libmachine: (addons-892214)     <disk type='file' device='disk'>
	I0802 17:27:51.676745   13693 main.go:141] libmachine: (addons-892214)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:27:51.676760   13693 main.go:141] libmachine: (addons-892214)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/addons-892214.rawdisk'/>
	I0802 17:27:51.676773   13693 main.go:141] libmachine: (addons-892214)       <target dev='hda' bus='virtio'/>
	I0802 17:27:51.676782   13693 main.go:141] libmachine: (addons-892214)     </disk>
	I0802 17:27:51.676794   13693 main.go:141] libmachine: (addons-892214)     <interface type='network'>
	I0802 17:27:51.676805   13693 main.go:141] libmachine: (addons-892214)       <source network='mk-addons-892214'/>
	I0802 17:27:51.676814   13693 main.go:141] libmachine: (addons-892214)       <model type='virtio'/>
	I0802 17:27:51.676820   13693 main.go:141] libmachine: (addons-892214)     </interface>
	I0802 17:27:51.676826   13693 main.go:141] libmachine: (addons-892214)     <interface type='network'>
	I0802 17:27:51.676832   13693 main.go:141] libmachine: (addons-892214)       <source network='default'/>
	I0802 17:27:51.676839   13693 main.go:141] libmachine: (addons-892214)       <model type='virtio'/>
	I0802 17:27:51.676849   13693 main.go:141] libmachine: (addons-892214)     </interface>
	I0802 17:27:51.676857   13693 main.go:141] libmachine: (addons-892214)     <serial type='pty'>
	I0802 17:27:51.676862   13693 main.go:141] libmachine: (addons-892214)       <target port='0'/>
	I0802 17:27:51.676868   13693 main.go:141] libmachine: (addons-892214)     </serial>
	I0802 17:27:51.676874   13693 main.go:141] libmachine: (addons-892214)     <console type='pty'>
	I0802 17:27:51.676883   13693 main.go:141] libmachine: (addons-892214)       <target type='serial' port='0'/>
	I0802 17:27:51.676889   13693 main.go:141] libmachine: (addons-892214)     </console>
	I0802 17:27:51.676898   13693 main.go:141] libmachine: (addons-892214)     <rng model='virtio'>
	I0802 17:27:51.676905   13693 main.go:141] libmachine: (addons-892214)       <backend model='random'>/dev/random</backend>
	I0802 17:27:51.676910   13693 main.go:141] libmachine: (addons-892214)     </rng>
	I0802 17:27:51.676916   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676921   13693 main.go:141] libmachine: (addons-892214)     
	I0802 17:27:51.676931   13693 main.go:141] libmachine: (addons-892214)   </devices>
	I0802 17:27:51.676939   13693 main.go:141] libmachine: (addons-892214) </domain>
	I0802 17:27:51.676948   13693 main.go:141] libmachine: (addons-892214) 
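
	The domain XML above is the full VM definition (boot ISO, raw disk, two virtio NICs, serial console, RNG) passed to libvirt. After creation it can be checked with generic libvirt tooling, e.g.:

	  virsh list --all                # the addons-892214 domain should appear once defined
	  virsh dumpxml addons-892214     # domain XML as libvirt stored it
	  virsh domiflist addons-892214   # the two interfaces: mk-addons-892214 and default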
	I0802 17:27:51.682787   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:b4:62:db in network default
	I0802 17:27:51.683312   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:51.683376   13693 main.go:141] libmachine: (addons-892214) Ensuring networks are active...
	I0802 17:27:51.683853   13693 main.go:141] libmachine: (addons-892214) Ensuring network default is active
	I0802 17:27:51.684100   13693 main.go:141] libmachine: (addons-892214) Ensuring network mk-addons-892214 is active
	I0802 17:27:51.684697   13693 main.go:141] libmachine: (addons-892214) Getting domain xml...
	I0802 17:27:51.685222   13693 main.go:141] libmachine: (addons-892214) Creating domain...
	I0802 17:27:53.057283   13693 main.go:141] libmachine: (addons-892214) Waiting to get IP...
	I0802 17:27:53.058078   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.058437   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.058462   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.058423   13717 retry.go:31] will retry after 253.172901ms: waiting for machine to come up
	I0802 17:27:53.312747   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.313205   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.313228   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.313153   13717 retry.go:31] will retry after 330.782601ms: waiting for machine to come up
	I0802 17:27:53.645740   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.646084   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.646150   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.646096   13717 retry.go:31] will retry after 324.585239ms: waiting for machine to come up
	I0802 17:27:53.972530   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:53.973026   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:53.973094   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:53.972981   13717 retry.go:31] will retry after 430.438542ms: waiting for machine to come up
	I0802 17:27:54.404565   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:54.404999   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:54.405034   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:54.404956   13717 retry.go:31] will retry after 479.7052ms: waiting for machine to come up
	I0802 17:27:54.886623   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:54.887000   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:54.887029   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:54.886952   13717 retry.go:31] will retry after 689.858544ms: waiting for machine to come up
	I0802 17:27:55.578832   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:55.579152   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:55.579176   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:55.579132   13717 retry.go:31] will retry after 893.166889ms: waiting for machine to come up
	I0802 17:27:56.473790   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:56.474224   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:56.474249   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:56.474184   13717 retry.go:31] will retry after 1.160354236s: waiting for machine to come up
	I0802 17:27:57.636582   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:57.636997   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:57.637029   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:57.636947   13717 retry.go:31] will retry after 1.777622896s: waiting for machine to come up
	I0802 17:27:59.416754   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:27:59.417056   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:27:59.417077   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:27:59.417028   13717 retry.go:31] will retry after 1.803146036s: waiting for machine to come up
	I0802 17:28:01.221891   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:01.222284   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:01.222314   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:01.222220   13717 retry.go:31] will retry after 2.502803711s: waiting for machine to come up
	I0802 17:28:03.727863   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:03.728196   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:03.728220   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:03.728138   13717 retry.go:31] will retry after 2.760974284s: waiting for machine to come up
	I0802 17:28:06.490248   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:06.490596   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:06.490620   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:06.490547   13717 retry.go:31] will retry after 2.805071087s: waiting for machine to come up
	I0802 17:28:09.299439   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:09.299759   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find current IP address of domain addons-892214 in network mk-addons-892214
	I0802 17:28:09.299788   13693 main.go:141] libmachine: (addons-892214) DBG | I0802 17:28:09.299713   13717 retry.go:31] will retry after 5.09623066s: waiting for machine to come up
	I0802 17:28:14.399714   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.400052   13693 main.go:141] libmachine: (addons-892214) Found IP for machine: 192.168.39.4
	I0802 17:28:14.400081   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has current primary IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.400091   13693 main.go:141] libmachine: (addons-892214) Reserving static IP address...
	I0802 17:28:14.400355   13693 main.go:141] libmachine: (addons-892214) DBG | unable to find host DHCP lease matching {name: "addons-892214", mac: "52:54:00:00:90:54", ip: "192.168.39.4"} in network mk-addons-892214
	I0802 17:28:14.468033   13693 main.go:141] libmachine: (addons-892214) DBG | Getting to WaitForSSH function...
	I0802 17:28:14.468059   13693 main.go:141] libmachine: (addons-892214) Reserved static IP address: 192.168.39.4
	I0802 17:28:14.468072   13693 main.go:141] libmachine: (addons-892214) Waiting for SSH to be available...
	I0802 17:28:14.470508   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.471044   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.471064   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.471327   13693 main.go:141] libmachine: (addons-892214) DBG | Using SSH client type: external
	I0802 17:28:14.471341   13693 main.go:141] libmachine: (addons-892214) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa (-rw-------)
	I0802 17:28:14.471356   13693 main.go:141] libmachine: (addons-892214) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:28:14.471364   13693 main.go:141] libmachine: (addons-892214) DBG | About to run SSH command:
	I0802 17:28:14.471373   13693 main.go:141] libmachine: (addons-892214) DBG | exit 0
	I0802 17:28:14.603145   13693 main.go:141] libmachine: (addons-892214) DBG | SSH cmd err, output: <nil>: 
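
	The external SSH probe logged above can be reproduced by hand when debugging connectivity; using the key path and address from the log (and dropping most of the tuning options), the equivalent command is approximately:

	  ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa \
	    docker@192.168.39.4 'exit 0'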
	I0802 17:28:14.603397   13693 main.go:141] libmachine: (addons-892214) KVM machine creation complete!
	I0802 17:28:14.603720   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:28:14.604182   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:14.604372   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:14.604534   13693 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:28:14.604556   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:14.605815   13693 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:28:14.605832   13693 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:28:14.605840   13693 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:28:14.605847   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.608094   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.608410   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.608435   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.608536   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.608689   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.608840   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.608934   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.609089   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.609275   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.609286   13693 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:28:14.710218   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:28:14.710242   13693 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:28:14.710252   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.712634   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.712891   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.712916   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.713010   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.713157   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.713282   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.713404   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.713548   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.713703   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.713713   13693 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:28:14.811425   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:28:14.811491   13693 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:28:14.811498   13693 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:28:14.811505   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:14.811751   13693 buildroot.go:166] provisioning hostname "addons-892214"
	I0802 17:28:14.811781   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:14.812098   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.814230   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.814571   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.814602   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.814753   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.814953   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.815232   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.815375   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.815569   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.815771   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.815788   13693 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-892214 && echo "addons-892214" | sudo tee /etc/hostname
	I0802 17:28:14.928650   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-892214
	
	I0802 17:28:14.928674   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:14.931179   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.931566   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:14.931593   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:14.931776   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:14.931975   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.932140   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:14.932270   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:14.932399   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:14.932548   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:14.932564   13693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-892214' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-892214/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-892214' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:28:15.039049   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:28:15.039074   13693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:28:15.039128   13693 buildroot.go:174] setting up certificates
	I0802 17:28:15.039146   13693 provision.go:84] configureAuth start
	I0802 17:28:15.039161   13693 main.go:141] libmachine: (addons-892214) Calling .GetMachineName
	I0802 17:28:15.039405   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.041864   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.042167   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.042188   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.042314   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.044301   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.044641   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.044664   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.044772   13693 provision.go:143] copyHostCerts
	I0802 17:28:15.044852   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:28:15.044986   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:28:15.045117   13693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:28:15.045210   13693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.addons-892214 san=[127.0.0.1 192.168.39.4 addons-892214 localhost minikube]
	I0802 17:28:15.276127   13693 provision.go:177] copyRemoteCerts
	I0802 17:28:15.276189   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:28:15.276210   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.278638   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.278875   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.278918   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.279091   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.279302   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.279464   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.279629   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.360956   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:28:15.382411   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:28:15.403516   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:28:15.424081   13693 provision.go:87] duration metric: took 384.916003ms to configureAuth
	I0802 17:28:15.424106   13693 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:28:15.424295   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:15.424381   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.426788   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.427143   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.427169   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.427306   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.427506   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.427681   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.427793   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.427967   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:15.428113   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:15.428134   13693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:28:15.680103   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
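(Editor's note: the SSH command above writes the CRI-O insecure-registry option and restarts the runtime. A rough manual equivalent, reusing the path and option string shown in the log; a sketch, not part of the run.)

# Manual equivalent of the provisioning step above (values copied from the log).
sudo mkdir -p /etc/sysconfig
printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
  | sudo tee /etc/sysconfig/crio.minikube
sudo systemctl restart crio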
	
	I0802 17:28:15.680128   13693 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:28:15.680139   13693 main.go:141] libmachine: (addons-892214) Calling .GetURL
	I0802 17:28:15.681365   13693 main.go:141] libmachine: (addons-892214) DBG | Using libvirt version 6000000
	I0802 17:28:15.683436   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.683797   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.683826   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.683980   13693 main.go:141] libmachine: Docker is up and running!
	I0802 17:28:15.683992   13693 main.go:141] libmachine: Reticulating splines...
	I0802 17:28:15.683999   13693 client.go:171] duration metric: took 24.624550565s to LocalClient.Create
	I0802 17:28:15.684019   13693 start.go:167] duration metric: took 24.624611357s to libmachine.API.Create "addons-892214"
	I0802 17:28:15.684029   13693 start.go:293] postStartSetup for "addons-892214" (driver="kvm2")
	I0802 17:28:15.684048   13693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:28:15.684064   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.684287   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:28:15.684309   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.686178   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.686471   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.686500   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.686623   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.686789   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.686926   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.687062   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.764883   13693 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:28:15.768763   13693 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:28:15.768789   13693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:28:15.768867   13693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:28:15.768892   13693 start.go:296] duration metric: took 84.849247ms for postStartSetup
	I0802 17:28:15.768925   13693 main.go:141] libmachine: (addons-892214) Calling .GetConfigRaw
	I0802 17:28:15.769446   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.771664   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.771936   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.771967   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.772135   13693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/config.json ...
	I0802 17:28:15.772344   13693 start.go:128] duration metric: took 24.730151057s to createHost
	I0802 17:28:15.772383   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.774276   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.774570   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.774599   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.774743   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.774898   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.775065   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.775226   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.775368   13693 main.go:141] libmachine: Using SSH client type: native
	I0802 17:28:15.775528   13693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0802 17:28:15.775537   13693 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:28:15.875435   13693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722619695.853672833
	
	I0802 17:28:15.875457   13693 fix.go:216] guest clock: 1722619695.853672833
	I0802 17:28:15.875465   13693 fix.go:229] Guest: 2024-08-02 17:28:15.853672833 +0000 UTC Remote: 2024-08-02 17:28:15.772370386 +0000 UTC m=+24.827229333 (delta=81.302447ms)
	I0802 17:28:15.875498   13693 fix.go:200] guest clock delta is within tolerance: 81.302447ms
	I0802 17:28:15.875505   13693 start.go:83] releasing machines lock for "addons-892214", held for 24.833381788s
	I0802 17:28:15.875532   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.875801   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:15.878641   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.879541   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.879571   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.879699   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880133   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880300   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:15.880362   13693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:28:15.880408   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.880511   13693 ssh_runner.go:195] Run: cat /version.json
	I0802 17:28:15.880533   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:15.883178   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883203   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883465   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.883491   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883518   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:15.883536   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:15.883576   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.883770   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.883778   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:15.883950   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:15.883958   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.884110   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:15.884130   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:15.884269   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:16.004546   13693 ssh_runner.go:195] Run: systemctl --version
	I0802 17:28:16.010175   13693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:28:16.173856   13693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:28:16.179031   13693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:28:16.179121   13693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:28:16.193265   13693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:28:16.193288   13693 start.go:495] detecting cgroup driver to use...
	I0802 17:28:16.193349   13693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:28:16.210050   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:28:16.221927   13693 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:28:16.221979   13693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:28:16.234221   13693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:28:16.246494   13693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:28:16.356545   13693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:28:16.496079   13693 docker.go:233] disabling docker service ...
	I0802 17:28:16.496160   13693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:28:16.509227   13693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:28:16.521654   13693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:28:16.652010   13693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:28:16.761180   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:28:16.773772   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:28:16.791168   13693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:28:16.791230   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.800743   13693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:28:16.800829   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.810231   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.819535   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.828995   13693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:28:16.838607   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.848046   13693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:28:16.863491   13693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
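(Editor's note: the sed commands above set the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in. A quick verification sketch over the same file path, purely illustrative.)

# Sketch: check the values the sed edits above should have left in the drop-in.
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
  /etc/crio/crio.conf.d/02-crio.conf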
	I0802 17:28:16.872711   13693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:28:16.881407   13693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:28:16.881451   13693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:28:16.892601   13693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:28:16.901360   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:17.003241   13693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:28:17.132080   13693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:28:17.132177   13693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:28:17.136186   13693 start.go:563] Will wait 60s for crictl version
	I0802 17:28:17.136248   13693 ssh_runner.go:195] Run: which crictl
	I0802 17:28:17.139421   13693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:28:17.174482   13693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:28:17.174606   13693 ssh_runner.go:195] Run: crio --version
	I0802 17:28:17.200899   13693 ssh_runner.go:195] Run: crio --version
	I0802 17:28:17.228286   13693 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:28:17.229513   13693 main.go:141] libmachine: (addons-892214) Calling .GetIP
	I0802 17:28:17.232106   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:17.232416   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:17.232447   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:17.232635   13693 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:28:17.236413   13693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:28:17.247890   13693 kubeadm.go:883] updating cluster {Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:28:17.248027   13693 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:28:17.248091   13693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:28:17.278350   13693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 17:28:17.278422   13693 ssh_runner.go:195] Run: which lz4
	I0802 17:28:17.281995   13693 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 17:28:17.285816   13693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 17:28:17.285846   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 17:28:18.400521   13693 crio.go:462] duration metric: took 1.118561309s to copy over tarball
	I0802 17:28:18.400588   13693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 17:28:20.618059   13693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.217439698s)
	I0802 17:28:20.618091   13693 crio.go:469] duration metric: took 2.217545642s to extract the tarball
	I0802 17:28:20.618098   13693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 17:28:20.654721   13693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:28:20.693160   13693 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:28:20.693181   13693 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:28:20.693188   13693 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.30.3 crio true true} ...
	I0802 17:28:20.693281   13693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-892214 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
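(Editor's note: the kubelet unit fragment above is what minikube generates; a little further down the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service. A hedged way to inspect the merged unit on the node, not part of the run.)

# Sketch: view the kubelet unit plus the drop-in that the fragment above ends up in.
systemctl cat kubelet
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf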
	I0802 17:28:20.693344   13693 ssh_runner.go:195] Run: crio config
	I0802 17:28:20.738842   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:28:20.738866   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:28:20.738878   13693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:28:20.738917   13693 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-892214 NodeName:addons-892214 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:28:20.739058   13693 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-892214"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 17:28:20.739127   13693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:28:20.748102   13693 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:28:20.748170   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 17:28:20.756804   13693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0802 17:28:20.772147   13693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:28:20.789966   13693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
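(Editor's note: the kubeadm config shown earlier has just been written to /var/tmp/minikube/kubeadm.yaml.new. If one wanted to sanity-check such a file by hand, a dry run is one option; the sketch below mirrors the PATH and config path the log uses for the real kubeadm init further down, but the --dry-run invocation itself is not part of this run.)

# Sketch: dry-run the generated kubeadm config without touching the node.
sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run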
	I0802 17:28:20.807986   13693 ssh_runner.go:195] Run: grep 192.168.39.4	control-plane.minikube.internal$ /etc/hosts
	I0802 17:28:20.811612   13693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:28:20.822697   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:20.966427   13693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:28:20.983220   13693 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214 for IP: 192.168.39.4
	I0802 17:28:20.983239   13693 certs.go:194] generating shared ca certs ...
	I0802 17:28:20.983259   13693 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:20.983400   13693 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:28:21.056606   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt ...
	I0802 17:28:21.056633   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt: {Name:mk7f2c81f05a97dea4ed48c16c19f59235c98d20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.056811   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key ...
	I0802 17:28:21.056827   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key: {Name:mk3b486491520ba40a02b021ce755433ce8d0de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.056923   13693 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:28:21.286187   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt ...
	I0802 17:28:21.286224   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt: {Name:mkb20879e4d0347acb03a2cb528decfd19f1525d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.286438   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key ...
	I0802 17:28:21.286456   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key: {Name:mk5d8ccb4c0b21bba1534a9aa4c7e6d10b5e11e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.286567   13693 certs.go:256] generating profile certs ...
	I0802 17:28:21.286641   13693 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key
	I0802 17:28:21.286668   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt with IP's: []
	I0802 17:28:21.372555   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt ...
	I0802 17:28:21.372593   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: {Name:mk9153b58d05737bb3729486a9de5259d8b40218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.372787   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key ...
	I0802 17:28:21.372803   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.key: {Name:mka8c048727605d8e0f9e1df6d4be86275965409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.372910   13693 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf
	I0802 17:28:21.372945   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4]
	I0802 17:28:21.767436   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf ...
	I0802 17:28:21.767470   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf: {Name:mk4d86360581729e088f3e659727b5d1fbd4296f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.767632   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf ...
	I0802 17:28:21.767646   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf: {Name:mk7653b5966696f07abb50c8f3ffb9a775b79ff0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.767716   13693 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt.a94bcccf -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt
	I0802 17:28:21.767790   13693 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key.a94bcccf -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key
	I0802 17:28:21.767834   13693 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key
	I0802 17:28:21.767852   13693 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt with IP's: []
	I0802 17:28:21.934875   13693 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt ...
	I0802 17:28:21.934908   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt: {Name:mk8ac06a3eac335a09fee7d690e1936ed369e3dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.935067   13693 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key ...
	I0802 17:28:21.935077   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key: {Name:mk4aadd41b1e90e47581c1d4d731e1d3b3bf970f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:21.935275   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:28:21.935308   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:28:21.935344   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:28:21.935367   13693 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:28:21.935900   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:28:21.958757   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:28:21.980135   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:28:22.000723   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:28:22.021394   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 17:28:22.042084   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:28:22.062828   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:28:22.086261   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:28:22.109090   13693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:28:22.131632   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:28:22.146735   13693 ssh_runner.go:195] Run: openssl version
	I0802 17:28:22.152267   13693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:28:22.161654   13693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.165510   13693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.165572   13693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:28:22.170949   13693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
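(Editor's note: the two steps above hash minikubeCA.pem and link it into /etc/ssl/certs under its subject-hash name, b5213941.0 on this run. A minimal sketch of how that symlink name is derived, reusing the same openssl call the log runs.)

# Sketch: derive the hash-named symlink for the CA and confirm it exists.
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
echo "${h}.0"            # expected: b5213941.0 on this run
ls -l "/etc/ssl/certs/${h}.0"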
	I0802 17:28:22.180346   13693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:28:22.183818   13693 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:28:22.183870   13693 kubeadm.go:392] StartCluster: {Name:addons-892214 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-892214 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:28:22.183979   13693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:28:22.184020   13693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:28:22.217533   13693 cri.go:89] found id: ""
	I0802 17:28:22.217627   13693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 17:28:22.226797   13693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 17:28:22.235428   13693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 17:28:22.243746   13693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 17:28:22.243768   13693 kubeadm.go:157] found existing configuration files:
	
	I0802 17:28:22.243818   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 17:28:22.251773   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 17:28:22.251830   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 17:28:22.260184   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 17:28:22.268018   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 17:28:22.268065   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 17:28:22.276195   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 17:28:22.283962   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 17:28:22.284016   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 17:28:22.292229   13693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 17:28:22.299998   13693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 17:28:22.300043   13693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 17:28:22.308216   13693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 17:28:22.504913   13693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 17:28:32.343249   13693 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 17:28:32.343323   13693 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 17:28:32.343406   13693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 17:28:32.343583   13693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 17:28:32.343713   13693 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 17:28:32.343804   13693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 17:28:32.345280   13693 out.go:204]   - Generating certificates and keys ...
	I0802 17:28:32.345391   13693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 17:28:32.345467   13693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 17:28:32.345559   13693 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 17:28:32.345647   13693 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 17:28:32.345714   13693 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 17:28:32.345771   13693 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 17:28:32.345825   13693 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 17:28:32.345935   13693 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-892214 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0802 17:28:32.346021   13693 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 17:28:32.346156   13693 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-892214 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0802 17:28:32.346254   13693 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 17:28:32.346354   13693 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 17:28:32.346423   13693 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 17:28:32.346505   13693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 17:28:32.346588   13693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 17:28:32.346680   13693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 17:28:32.346730   13693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 17:28:32.346787   13693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 17:28:32.346832   13693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 17:28:32.346898   13693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 17:28:32.346955   13693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 17:28:32.348284   13693 out.go:204]   - Booting up control plane ...
	I0802 17:28:32.348363   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 17:28:32.348439   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 17:28:32.348495   13693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 17:28:32.348583   13693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 17:28:32.348691   13693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 17:28:32.348733   13693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 17:28:32.348840   13693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 17:28:32.348903   13693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 17:28:32.348957   13693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.272826ms
	I0802 17:28:32.349045   13693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 17:28:32.349127   13693 kubeadm.go:310] [api-check] The API server is healthy after 5.0012416s
	I0802 17:28:32.349253   13693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 17:28:32.349396   13693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 17:28:32.349452   13693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 17:28:32.349620   13693 kubeadm.go:310] [mark-control-plane] Marking the node addons-892214 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 17:28:32.349717   13693 kubeadm.go:310] [bootstrap-token] Using token: zy0nf3.h41pvfnv7qqy1skc
	I0802 17:28:32.350922   13693 out.go:204]   - Configuring RBAC rules ...
	I0802 17:28:32.351024   13693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 17:28:32.351155   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 17:28:32.351313   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 17:28:32.351459   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 17:28:32.351623   13693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 17:28:32.351739   13693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 17:28:32.351862   13693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 17:28:32.351929   13693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 17:28:32.351987   13693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 17:28:32.351996   13693 kubeadm.go:310] 
	I0802 17:28:32.352080   13693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 17:28:32.352091   13693 kubeadm.go:310] 
	I0802 17:28:32.352170   13693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 17:28:32.352176   13693 kubeadm.go:310] 
	I0802 17:28:32.352203   13693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 17:28:32.352252   13693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 17:28:32.352303   13693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 17:28:32.352309   13693 kubeadm.go:310] 
	I0802 17:28:32.352352   13693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 17:28:32.352357   13693 kubeadm.go:310] 
	I0802 17:28:32.352431   13693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 17:28:32.352445   13693 kubeadm.go:310] 
	I0802 17:28:32.352530   13693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 17:28:32.352597   13693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 17:28:32.352685   13693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 17:28:32.352693   13693 kubeadm.go:310] 
	I0802 17:28:32.352807   13693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 17:28:32.352912   13693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 17:28:32.352921   13693 kubeadm.go:310] 
	I0802 17:28:32.353027   13693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zy0nf3.h41pvfnv7qqy1skc \
	I0802 17:28:32.353148   13693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 17:28:32.353178   13693 kubeadm.go:310] 	--control-plane 
	I0802 17:28:32.353186   13693 kubeadm.go:310] 
	I0802 17:28:32.353301   13693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 17:28:32.353310   13693 kubeadm.go:310] 
	I0802 17:28:32.353422   13693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zy0nf3.h41pvfnv7qqy1skc \
	I0802 17:28:32.353564   13693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
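The control-plane and worker join commands printed above pin the cluster CA through --discovery-token-ca-cert-hash. If that hash ever needs to be re-derived (for example after the bootstrap token expires and a new one is issued), it can be recomputed from the CA certificate; a minimal sketch, assuming the standard kubeadm PKI path inside the node:

	# Recompute the sha256 discovery hash from the cluster CA certificate
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# A fresh token plus a matching worker join line can be printed with:
	kubeadm token create --print-join-command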
	I0802 17:28:32.353578   13693 cni.go:84] Creating CNI manager for ""
	I0802 17:28:32.353586   13693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:28:32.354926   13693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 17:28:32.356125   13693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 17:28:32.365893   13693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
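The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. For orientation only, a bridge CNI chain for a setup like this typically looks roughly like the following; this is an illustrative sketch (including the pod subnet value), not the exact file minikube generates:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF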
	I0802 17:28:32.382705   13693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 17:28:32.382770   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:32.382791   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-892214 minikube.k8s.io/updated_at=2024_08_02T17_28_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=addons-892214 minikube.k8s.io/primary=true
	I0802 17:28:32.410380   13693 ops.go:34] apiserver oom_adj: -16
	I0802 17:28:32.506457   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:33.006704   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:33.506661   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:34.006795   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:34.507010   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:35.007446   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:35.507523   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:36.006518   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:36.506988   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:37.007428   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:37.507387   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:38.006517   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:38.506538   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:39.007255   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:39.506981   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:40.007437   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:40.507292   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:41.007047   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:41.506743   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:42.007004   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:42.506819   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:43.007204   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:43.506873   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:44.007222   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:44.506801   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:45.006566   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:45.507172   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:46.007264   13693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:28:46.113047   13693 kubeadm.go:1113] duration metric: took 13.730333253s to wait for elevateKubeSystemPrivileges
	I0802 17:28:46.113085   13693 kubeadm.go:394] duration metric: took 23.929215755s to StartCluster
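The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, which is what the 13.73s elevateKubeSystemPrivileges metric measures. The same wait can be reproduced by hand with a small loop built from the exact command shown in the log:

	# Poll (from inside the node) until the default service account exists
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done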
	I0802 17:28:46.113106   13693 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:46.113226   13693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:28:46.113576   13693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:28:46.113828   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 17:28:46.113829   13693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:28:46.113875   13693 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
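The toEnable map above is the per-profile addon selection the test asked for; the same toggles can be driven from the host with the addons subcommand, for example:

	minikube -p addons-892214 addons list
	minikube -p addons-892214 addons enable metrics-server
	minikube -p addons-892214 addons disable volcano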
	I0802 17:28:46.113984   13693 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-892214"
	I0802 17:28:46.114000   13693 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-892214"
	I0802 17:28:46.114003   13693 addons.go:69] Setting metrics-server=true in profile "addons-892214"
	I0802 17:28:46.114031   13693 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-892214"
	I0802 17:28:46.114038   13693 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-892214"
	I0802 17:28:46.114048   13693 addons.go:69] Setting ingress=true in profile "addons-892214"
	I0802 17:28:46.114039   13693 addons.go:69] Setting default-storageclass=true in profile "addons-892214"
	I0802 17:28:46.114060   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:46.114075   13693 addons.go:69] Setting registry=true in profile "addons-892214"
	I0802 17:28:46.114078   13693 addons.go:69] Setting ingress-dns=true in profile "addons-892214"
	I0802 17:28:46.114078   13693 addons.go:69] Setting cloud-spanner=true in profile "addons-892214"
	I0802 17:28:46.114081   13693 addons.go:69] Setting inspektor-gadget=true in profile "addons-892214"
	I0802 17:28:46.114093   13693 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-892214"
	I0802 17:28:46.114103   13693 addons.go:234] Setting addon cloud-spanner=true in "addons-892214"
	I0802 17:28:46.114103   13693 addons.go:69] Setting volcano=true in profile "addons-892214"
	I0802 17:28:46.114106   13693 addons.go:69] Setting volumesnapshots=true in profile "addons-892214"
	I0802 17:28:46.114038   13693 addons.go:234] Setting addon metrics-server=true in "addons-892214"
	I0802 17:28:46.114116   13693 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-892214"
	I0802 17:28:46.114120   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114122   13693 addons.go:234] Setting addon volcano=true in "addons-892214"
	I0802 17:28:46.114125   13693 addons.go:234] Setting addon volumesnapshots=true in "addons-892214"
	I0802 17:28:46.114140   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114095   13693 addons.go:234] Setting addon registry=true in "addons-892214"
	I0802 17:28:46.114179   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114105   13693 addons.go:234] Setting addon inspektor-gadget=true in "addons-892214"
	I0802 17:28:46.114292   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114081   13693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-892214"
	I0802 17:28:46.114140   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114545   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114556   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114562   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114576   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114579   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114595   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114066   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114545   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114671   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114687   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114703   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114710   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114071   13693 addons.go:234] Setting addon ingress=true in "addons-892214"
	I0802 17:28:46.113988   13693 addons.go:69] Setting yakd=true in profile "addons-892214"
	I0802 17:28:46.114725   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114744   13693 addons.go:234] Setting addon yakd=true in "addons-892214"
	I0802 17:28:46.114068   13693 addons.go:69] Setting gcp-auth=true in profile "addons-892214"
	I0802 17:28:46.114039   13693 addons.go:69] Setting helm-tiller=true in profile "addons-892214"
	I0802 17:28:46.114776   13693 mustload.go:65] Loading cluster: addons-892214
	I0802 17:28:46.114095   13693 addons.go:234] Setting addon ingress-dns=true in "addons-892214"
	I0802 17:28:46.114148   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114796   13693 addons.go:234] Setting addon helm-tiller=true in "addons-892214"
	I0802 17:28:46.114110   13693 addons.go:69] Setting storage-provisioner=true in profile "addons-892214"
	I0802 17:28:46.114069   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114921   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.114942   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.114970   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.114894   13693 addons.go:234] Setting addon storage-provisioner=true in "addons-892214"
	I0802 17:28:46.115032   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115096   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115138   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115145   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115211   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115231   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115270   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115272   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115286   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115295   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115381   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.115418   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115456   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.115523   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.115563   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.116001   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.116324   13693 out.go:177] * Verifying Kubernetes components...
	I0802 17:28:46.116401   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.116422   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.127412   13693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:28:46.134986   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44247
	I0802 17:28:46.135355   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0802 17:28:46.135534   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.135649   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0802 17:28:46.135885   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.136077   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136095   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136234   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.136376   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136388   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136444   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.136677   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.136702   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.136758   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.136928   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.136997   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.137368   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.137403   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.137536   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.137570   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.138247   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0802 17:28:46.138676   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.139158   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.139177   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.139478   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.139992   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.140027   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.143478   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.143516   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.144682   13693 config.go:182] Loaded profile config "addons-892214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:28:46.145019   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.145051   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.145589   13693 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-892214"
	I0802 17:28:46.145641   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.145997   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.146028   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.161117   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33651
	I0802 17:28:46.162229   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I0802 17:28:46.162811   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.162922   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0802 17:28:46.165292   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39299
	I0802 17:28:46.165318   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
	I0802 17:28:46.165427   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.165785   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.165834   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.166266   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.166276   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.166284   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.166293   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.166612   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.166612   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.166797   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.167184   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.167202   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.167206   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.167236   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.167549   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.168127   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.168169   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.168409   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.168493   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.168512   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.168844   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.169190   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.169210   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.169224   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.169413   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.169453   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.169648   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.170204   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.170236   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.171217   13693 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0802 17:28:46.172787   13693 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:28:46.172811   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0802 17:28:46.172829   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.176064   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.176622   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.176650   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.176797   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.176960   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.177112   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.177235   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
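Each addon is installed by scp-ing its manifest into /etc/kubernetes/addons on the node over the SSH client set up above (nvidia-device-plugin.yaml here; the addons that follow use the same pattern). As a quick check, the staged manifests can be listed directly on the node, e.g.:

	# List the addon manifests that minikube has copied onto the node
	minikube -p addons-892214 ssh -- ls -l /etc/kubernetes/addons/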
	I0802 17:28:46.189148   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0802 17:28:46.189600   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.190087   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.190100   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.190385   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.190788   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.190811   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.191574   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38487
	I0802 17:28:46.191988   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.192126   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46227
	I0802 17:28:46.192532   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.192548   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.192605   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.192961   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.193684   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.193742   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
	I0802 17:28:46.193748   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.193764   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.194235   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.194700   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.194760   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.194774   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.195358   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.195412   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.195592   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.195649   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.195991   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.196026   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.196216   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.197741   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.199496   13693 out.go:177]   - Using image docker.io/registry:2.8.3
	I0802 17:28:46.200457   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I0802 17:28:46.201687   13693 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0802 17:28:46.202331   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.202610   13693 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0802 17:28:46.202625   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0802 17:28:46.202642   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.205280   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34463
	I0802 17:28:46.206430   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.206726   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.207256   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.207278   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.207329   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.207346   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.207986   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.208015   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.208048   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.208227   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.208271   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.209203   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.210019   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.210106   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.210382   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.210856   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.211693   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.211738   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.212095   13693 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0802 17:28:46.212413   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43253
	I0802 17:28:46.213032   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.213268   13693 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0802 17:28:46.213282   13693 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0802 17:28:46.213301   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.213809   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.213825   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.214172   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.214736   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.214775   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.215032   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0802 17:28:46.215882   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0802 17:28:46.216286   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.216959   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I0802 17:28:46.217797   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.217813   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.218368   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36269
	I0802 17:28:46.218603   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.218710   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.218760   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.218799   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42883
	I0802 17:28:46.219081   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.219167   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.219563   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.219673   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.219699   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.219597   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.219719   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.219910   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.220067   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.220230   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.220390   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.220650   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.220867   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.220990   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.221003   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.221054   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.221330   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37967
	I0802 17:28:46.221347   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.221407   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:46.221415   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:46.221628   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.221648   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.221719   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.221776   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:46.221795   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:46.221803   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:46.221810   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:46.221816   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:46.221830   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.221850   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.221964   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:46.221986   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:46.221993   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	W0802 17:28:46.222066   13693 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
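This warning is expected on this job: the volcano addon is skipped because the profile uses the crio container runtime, as shown by the ContainerRuntime=crio profile config echoed earlier. The runtime can also be confirmed from the cluster itself:

	# The CONTAINER-RUNTIME column reports cri-o://... for this profile
	kubectl --context addons-892214 get nodes -o wide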
	I0802 17:28:46.222134   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.222452   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.223552   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.223769   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.223827   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0802 17:28:46.224142   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.224964   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.224989   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.225391   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.225419   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.225623   13693 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0802 17:28:46.225799   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.225819   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.225970   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.226004   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.226111   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.226130   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.226407   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.226976   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.227004   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.227175   13693 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0802 17:28:46.227931   13693 out.go:177]   - Using image docker.io/busybox:stable
	I0802 17:28:46.227396   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.229043   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40671
	I0802 17:28:46.229177   13693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:28:46.229192   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0802 17:28:46.229211   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.229278   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 17:28:46.229286   13693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 17:28:46.229299   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.230270   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.230816   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.230833   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.231282   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39653
	I0802 17:28:46.231391   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.231807   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.232124   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.233004   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.233843   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.233903   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234161   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.234177   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234207   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.234264   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.234278   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.234544   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.234558   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.234670   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35913
	I0802 17:28:46.234776   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.234818   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.234859   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.234925   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.234957   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.234990   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.235024   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.235320   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.235551   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.236297   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0802 17:28:46.236738   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.237224   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.237239   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.237291   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.237892   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.238110   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.238339   13693 addons.go:234] Setting addon default-storageclass=true in "addons-892214"
	I0802 17:28:46.238382   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:46.238421   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.239063   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.239099   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.239332   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.239454   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.239773   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.239830   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.240126   13693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 17:28:46.240152   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.241817   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.241854   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0802 17:28:46.241925   13693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:28:46.241941   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 17:28:46.241958   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.243344   13693 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0802 17:28:46.243396   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0802 17:28:46.243411   13693 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0802 17:28:46.243430   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.244803   13693 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0802 17:28:46.244822   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0802 17:28:46.244840   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.245804   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.246554   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.246577   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.246795   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.247083   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.247245   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.247384   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.247808   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.248255   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.248276   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.248509   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.248674   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.248811   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.248933   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.249795   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.250158   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.250175   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.250355   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.250497   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.250627   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.250762   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.253583   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0802 17:28:46.253959   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.254408   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.254432   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.254760   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.255007   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.257385   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.259360   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32903
	I0802 17:28:46.259779   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.260305   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.260324   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.260385   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40449
	I0802 17:28:46.260829   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.260993   13693 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0802 17:28:46.261535   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:46.261553   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:46.261837   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0802 17:28:46.261870   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.262319   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.262336   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.262549   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.262637   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.262719   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0802 17:28:46.262732   13693 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0802 17:28:46.262749   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.262798   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.263140   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.263156   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.263616   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.264019   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.264664   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.266110   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.266179   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:46.267249   13693 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0802 17:28:46.267483   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I0802 17:28:46.267898   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.268380   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.268398   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.268421   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0802 17:28:46.268436   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0802 17:28:46.268450   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0802 17:28:46.268456   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.269090   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.269286   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.269601   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.270014   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.270051   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.270305   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.270456   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.270581   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.270701   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.270780   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:46.270972   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0802 17:28:46.271613   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.271726   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.272034   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.272251   13693 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:28:46.272266   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0802 17:28:46.272281   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.272302   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.272323   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.272789   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0802 17:28:46.272821   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.272840   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.272855   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.272895   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.273046   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.273068   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.273207   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.273347   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.275061   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.275118   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0802 17:28:46.275894   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.276403   13693 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0802 17:28:46.276419   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.276568   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.276600   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.276731   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.276852   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.276969   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.277781   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0802 17:28:46.277891   13693 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:28:46.277903   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0802 17:28:46.277918   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.279758   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0802 17:28:46.280637   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0802 17:28:46.281051   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:46.281262   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.281482   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:46.281498   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:46.281884   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0802 17:28:46.281891   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.281919   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.281939   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.282046   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:46.282081   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.282207   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:46.282239   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.282383   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.283584   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:46.284065   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0802 17:28:46.284347   13693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 17:28:46.284365   13693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 17:28:46.284382   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.285986   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0802 17:28:46.287090   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.287114   13693 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0802 17:28:46.287574   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.287597   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.287767   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.288073   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.288233   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.288361   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.288448   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0802 17:28:46.288461   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0802 17:28:46.288477   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:46.291193   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.291569   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:46.291598   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:46.291805   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:46.291941   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:46.292053   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:46.292170   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:46.483280   13693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:28:46.483375   13693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 17:28:46.565500   13693 node_ready.go:35] waiting up to 6m0s for node "addons-892214" to be "Ready" ...
	I0802 17:28:46.568465   13693 node_ready.go:49] node "addons-892214" has status "Ready":"True"
	I0802 17:28:46.568488   13693 node_ready.go:38] duration metric: took 2.963043ms for node "addons-892214" to be "Ready" ...
	I0802 17:28:46.568499   13693 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:28:46.580321   13693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:46.633113   13693 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0802 17:28:46.633135   13693 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0802 17:28:46.714795   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0802 17:28:46.735603   13693 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0802 17:28:46.735643   13693 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0802 17:28:46.736403   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:28:46.748407   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0802 17:28:46.775088   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0802 17:28:46.775125   13693 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0802 17:28:46.789973   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0802 17:28:46.790999   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 17:28:46.791014   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0802 17:28:46.792704   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0802 17:28:46.792719   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0802 17:28:46.806854   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0802 17:28:46.819165   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0802 17:28:46.819186   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0802 17:28:46.822200   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0802 17:28:46.851146   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0802 17:28:46.851172   13693 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0802 17:28:46.862480   13693 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0802 17:28:46.862508   13693 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0802 17:28:46.884028   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 17:28:46.942834   13693 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:28:46.942854   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0802 17:28:46.969522   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0802 17:28:46.969549   13693 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0802 17:28:46.992935   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 17:28:46.992953   13693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 17:28:47.005041   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0802 17:28:47.005060   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0802 17:28:47.026333   13693 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:28:47.026353   13693 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0802 17:28:47.032143   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0802 17:28:47.032163   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0802 17:28:47.054431   13693 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0802 17:28:47.054451   13693 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0802 17:28:47.157337   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0802 17:28:47.177491   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0802 17:28:47.177520   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0802 17:28:47.190565   13693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:28:47.190600   13693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 17:28:47.207564   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0802 17:28:47.208626   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0802 17:28:47.208650   13693 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0802 17:28:47.218327   13693 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0802 17:28:47.218348   13693 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0802 17:28:47.228613   13693 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0802 17:28:47.228633   13693 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0802 17:28:47.357958   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0802 17:28:47.357985   13693 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0802 17:28:47.358535   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0802 17:28:47.358557   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0802 17:28:47.386518   13693 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:28:47.386538   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0802 17:28:47.460968   13693 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0802 17:28:47.460991   13693 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0802 17:28:47.462462   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 17:28:47.566677   13693 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0802 17:28:47.566717   13693 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0802 17:28:47.576030   13693 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:47.576049   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0802 17:28:47.596382   13693 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0802 17:28:47.596412   13693 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0802 17:28:47.718094   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0802 17:28:47.936147   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:47.983378   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0802 17:28:47.983410   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0802 17:28:48.005520   13693 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:28:48.005552   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0802 17:28:48.174543   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0802 17:28:48.174604   13693 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0802 17:28:48.276395   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0802 17:28:48.492946   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0802 17:28:48.492972   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0802 17:28:48.597064   13693 pod_ready.go:102] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:48.623012   13693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.139600751s)
	I0802 17:28:48.623048   13693 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
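The sed pipeline that just completed rewrites the coredns ConfigMap in place. Assuming the stock minikube Corefile, only the lines touched by that edit would change; the affected portion ends up roughly like this (untouched plugins elided), with the injected hosts block answering host.minikube.internal with the host gateway 192.168.39.1 and fallthrough handing every other name to the existing forward plugin:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }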
	I0802 17:28:48.623067   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.908239944s)
	I0802 17:28:48.623148   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:48.623163   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:48.623469   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:48.623481   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:48.623491   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:48.623500   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:48.623795   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:48.623809   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:48.751484   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0802 17:28:48.751502   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0802 17:28:49.012217   13693 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:28:49.012240   13693 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0802 17:28:49.127201   13693 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-892214" context rescaled to 1 replicas
	I0802 17:28:49.290739   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0802 17:28:50.632254   13693 pod_ready.go:102] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"False"
	I0802 17:28:51.165599   13693 pod_ready.go:92] pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.165630   13693 pod_ready.go:81] duration metric: took 4.585283703s for pod "coredns-7db6d8ff4d-p76fq" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.165644   13693 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.169108   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.432668876s)
	I0802 17:28:51.169159   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:51.169173   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:51.169468   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:51.169484   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:51.169493   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:51.169500   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:51.169521   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:51.169781   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:51.169795   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:51.169816   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:51.271034   13693 pod_ready.go:92] pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.271056   13693 pod_ready.go:81] duration metric: took 105.405602ms for pod "coredns-7db6d8ff4d-sk9vd" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.271066   13693 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.330385   13693 pod_ready.go:92] pod "etcd-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.330409   13693 pod_ready.go:81] duration metric: took 59.3373ms for pod "etcd-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.330421   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.393267   13693 pod_ready.go:92] pod "kube-apiserver-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.393290   13693 pod_ready.go:81] duration metric: took 62.861059ms for pod "kube-apiserver-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.393303   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.440066   13693 pod_ready.go:92] pod "kube-controller-manager-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.440097   13693 pod_ready.go:81] duration metric: took 46.784985ms for pod "kube-controller-manager-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.440110   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-54c9t" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.501853   13693 pod_ready.go:92] pod "kube-proxy-54c9t" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.501879   13693 pod_ready.go:81] duration metric: took 61.753814ms for pod "kube-proxy-54c9t" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.501892   13693 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.911043   13693 pod_ready.go:92] pod "kube-scheduler-addons-892214" in "kube-system" namespace has status "Ready":"True"
	I0802 17:28:51.911063   13693 pod_ready.go:81] duration metric: took 409.163292ms for pod "kube-scheduler-addons-892214" in "kube-system" namespace to be "Ready" ...
	I0802 17:28:51.911071   13693 pod_ready.go:38] duration metric: took 5.342552875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:28:51.911086   13693 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:28:51.911149   13693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:28:53.290522   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0802 17:28:53.290558   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:53.293561   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.293941   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:53.293972   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.294162   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:53.294385   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:53.294568   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:53.294766   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:53.735559   13693 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0802 17:28:53.866599   13693 addons.go:234] Setting addon gcp-auth=true in "addons-892214"
	I0802 17:28:53.866657   13693 host.go:66] Checking if "addons-892214" exists ...
	I0802 17:28:53.866972   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:53.866998   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:53.882142   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I0802 17:28:53.882640   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:53.883069   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:53.883085   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:53.883410   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:53.883858   13693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:28:53.883883   13693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:28:53.899244   13693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0802 17:28:53.899639   13693 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:28:53.900084   13693 main.go:141] libmachine: Using API Version  1
	I0802 17:28:53.900106   13693 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:28:53.900415   13693 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:28:53.900626   13693 main.go:141] libmachine: (addons-892214) Calling .GetState
	I0802 17:28:53.902410   13693 main.go:141] libmachine: (addons-892214) Calling .DriverName
	I0802 17:28:53.902634   13693 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0802 17:28:53.902659   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHHostname
	I0802 17:28:53.905316   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.905898   13693 main.go:141] libmachine: (addons-892214) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:90:54", ip: ""} in network mk-addons-892214: {Iface:virbr1 ExpiryTime:2024-08-02 18:28:05 +0000 UTC Type:0 Mac:52:54:00:00:90:54 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:addons-892214 Clientid:01:52:54:00:00:90:54}
	I0802 17:28:53.905926   13693 main.go:141] libmachine: (addons-892214) DBG | domain addons-892214 has defined IP address 192.168.39.4 and MAC address 52:54:00:00:90:54 in network mk-addons-892214
	I0802 17:28:53.906070   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHPort
	I0802 17:28:53.906236   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHKeyPath
	I0802 17:28:53.906434   13693 main.go:141] libmachine: (addons-892214) Calling .GetSSHUsername
	I0802 17:28:53.906621   13693 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/addons-892214/id_rsa Username:docker}
	I0802 17:28:54.270164   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.521724443s)
	I0802 17:28:54.270208   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270217   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270300   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.480298903s)
	I0802 17:28:54.270347   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270364   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270411   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.463531122s)
	I0802 17:28:54.270438   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.448220756s)
	I0802 17:28:54.270443   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270453   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270457   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270466   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270516   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.270547   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.386494807s)
	I0802 17:28:54.270544   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.270562   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270568   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.270570   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270581   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270590   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270628   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.113264304s)
	I0802 17:28:54.270652   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270661   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270724   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.063131391s)
	I0802 17:28:54.270737   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270744   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270822   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.80834s)
	I0802 17:28:54.270841   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270849   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270919   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.552798995s)
	I0802 17:28:54.270934   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.270942   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.270984   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271003   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271016   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271026   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271032   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271032   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271039   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271043   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271053   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271055   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271067   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.334870873s)
	I0802 17:28:54.271081   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271045   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	W0802 17:28:54.271094   13693 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0802 17:28:54.271131   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271144   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271142   13693 retry.go:31] will retry after 140.165343ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
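The failure and the scheduled retry above are the expected shape of this race: the same apply creates the VolumeSnapshotClass CRD and a VolumeSnapshotClass object, and the API server is not yet serving snapshot.storage.k8s.io/v1 when the object arrives, so the apply is re-run after a short delay (140ms here, then again with --force at 17:28:54.412106 below). A minimal sketch of that retry-on-apply pattern, written in Go to match the tooling that produced this log; applyWithRetry, its arguments and the fixed back-off are illustrative assumptions, not minikube's actual retry.go.

    // Illustrative sketch only: re-run `kubectl apply` so freshly created CRDs
    // have time to become established before dependent objects are applied.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry runs `kubectl apply -f ...` up to `attempts` times,
    // sleeping between attempts; assumes kubectl is on PATH.
    func applyWithRetry(kubeconfig string, manifests []string, attempts int, delay time.Duration) error {
    	args := []string{"--kubeconfig", kubeconfig, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = fmt.Errorf("apply attempt %d failed: %v\n%s", i+1, err, out)
    		time.Sleep(delay) // give the API server time to register the CRDs
    	}
    	return lastErr
    }

    func main() {
    	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
    		3, 150*time.Millisecond); err != nil {
    		fmt.Println(err)
    	}
    }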
	I0802 17:28:54.271152   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271167   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271168   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271188   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271195   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271203   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271209   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271277   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.994847417s)
	I0802 17:28:54.271283   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271299   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271305   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271308   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271313   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271322   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271330   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271071   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271346   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271358   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271363   13693 addons.go:475] Verifying addon ingress=true in "addons-892214"
	I0802 17:28:54.271379   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271403   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271410   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271416   13693 addons.go:475] Verifying addon registry=true in "addons-892214"
	I0802 17:28:54.271856   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271881   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271888   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271896   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271902   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271945   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.271964   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.271970   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.271978   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.271987   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.272021   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272040   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272047   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272470   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272493   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272500   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272624   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272638   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272761   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.272831   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.272840   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.272866   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.272874   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.271097   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.273350   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.273362   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.273370   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.273377   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.273675   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.273699   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.273706   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.274028   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.274047   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.274052   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.274493   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.274520   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.275722   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.275737   13693 out.go:177] * Verifying registry addon...
	I0802 17:28:54.274947   13693 out.go:177] * Verifying ingress addon...
	I0802 17:28:54.274985   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.275009   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.275025   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.275041   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.276888   13693 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-892214 service yakd-dashboard -n yakd-dashboard
	
	I0802 17:28:54.277252   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.277261   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.277263   13693 addons.go:475] Verifying addon metrics-server=true in "addons-892214"
	I0802 17:28:54.278721   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0802 17:28:54.278853   13693 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0802 17:28:54.322893   13693 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0802 17:28:54.322914   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:54.342343   13693 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0802 17:28:54.342373   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:54.346564   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.346589   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.346962   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.346978   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	W0802 17:28:54.347062   13693 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0802 17:28:54.362192   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:54.362212   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:54.362569   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:54.362612   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:54.362623   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:54.412106   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0802 17:28:54.791977   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:54.792436   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.292085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.292634   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.341285   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.050501605s)
	I0802 17:28:55.341327   13693 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.43015665s)
	I0802 17:28:55.341357   13693 api_server.go:72] duration metric: took 9.227501733s to wait for apiserver process to appear ...
	I0802 17:28:55.341365   13693 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:28:55.341367   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:55.341370   13693 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.438718764s)
	I0802 17:28:55.341386   13693 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0802 17:28:55.341387   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:55.341860   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:55.341863   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:55.341884   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:55.341892   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:55.341898   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:55.342178   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:55.342192   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:55.342209   13693 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-892214"
	I0802 17:28:55.342827   13693 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0802 17:28:55.343591   13693 out.go:177] * Verifying csi-hostpath-driver addon...
	I0802 17:28:55.344972   13693 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0802 17:28:55.345683   13693 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0802 17:28:55.345893   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0802 17:28:55.345985   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0802 17:28:55.346004   13693 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0802 17:28:55.349040   13693 api_server.go:141] control plane version: v1.30.3
	I0802 17:28:55.349067   13693 api_server.go:131] duration metric: took 7.695276ms to wait for apiserver health ...
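The healthz wait recorded above ("Checking apiserver healthz at https://192.168.39.4:8443/healthz ..." followed by "returned 200: ok") is, at its core, an HTTPS GET retried until the endpoint answers "ok". A minimal sketch, assuming the URL from the log and skipping certificate verification purely for illustration (the real check authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the production check verifies the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.4:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}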
	I0802 17:28:55.349076   13693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:28:55.366818   13693 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0802 17:28:55.366842   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:55.384261   13693 system_pods.go:59] 19 kube-system pods found
	I0802 17:28:55.384290   13693 system_pods.go:61] "coredns-7db6d8ff4d-p76fq" [670e26de-e1a8-40ee-acf4-c6d4ce7b4d93] Running
	I0802 17:28:55.384294   13693 system_pods.go:61] "coredns-7db6d8ff4d-sk9vd" [f3173627-759d-4a33-bb57-808ee415d0c5] Running
	I0802 17:28:55.384301   13693 system_pods.go:61] "csi-hostpath-attacher-0" [227e1c3a-6e8d-4f98-b792-283449039f73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:28:55.384305   13693 system_pods.go:61] "csi-hostpath-resizer-0" [c6d4e68a-d483-4117-a6fd-d0a19698bb11] Pending
	I0802 17:28:55.384311   13693 system_pods.go:61] "csi-hostpathplugin-f6h9n" [07a6f05c-29ec-4f7d-a29e-9e9eae21e2b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:28:55.384315   13693 system_pods.go:61] "etcd-addons-892214" [1e9d0def-524d-43ca-b29c-2e1e66d2d47b] Running
	I0802 17:28:55.384319   13693 system_pods.go:61] "kube-apiserver-addons-892214" [731cdc04-a76d-4875-a043-754d4bfcd0f9] Running
	I0802 17:28:55.384322   13693 system_pods.go:61] "kube-controller-manager-addons-892214" [363f4529-972c-4645-b8e2-843e479d5b37] Running
	I0802 17:28:55.384327   13693 system_pods.go:61] "kube-ingress-dns-minikube" [ee00722b-6b3b-4626-b856-87ffccf9d0d2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0802 17:28:55.384332   13693 system_pods.go:61] "kube-proxy-54c9t" [cd068d1d-f377-4c1f-b13b-45c1df8b4eb2] Running
	I0802 17:28:55.384336   13693 system_pods.go:61] "kube-scheduler-addons-892214" [4f9abb24-eb93-4fe9-9de4-929eb510eed3] Running
	I0802 17:28:55.384341   13693 system_pods.go:61] "metrics-server-c59844bb4-smv7j" [8ea8885b-a830-4d58-80b8-a67cc4f26748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 17:28:55.384350   13693 system_pods.go:61] "nvidia-device-plugin-daemonset-7hdnl" [6af5e808-ef75-4f5b-8567-c08fc5f82515] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0802 17:28:55.384360   13693 system_pods.go:61] "registry-698f998955-cs8q7" [7d2c31bd-4360-46bd-82c0-b2258ba69944] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0802 17:28:55.384369   13693 system_pods.go:61] "registry-proxy-ntww4" [59de3da3-a31c-480b-8715-6dcecc3c01e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0802 17:28:55.384375   13693 system_pods.go:61] "snapshot-controller-745499f584-rzz47" [3db6259f-c6b3-4922-a452-a354b7ef788e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.384381   13693 system_pods.go:61] "snapshot-controller-745499f584-tnv6t" [394a2f4c-f536-4fb7-b476-2d8febddc5b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.384387   13693 system_pods.go:61] "storage-provisioner" [f4df5f76-bb9c-40a3-b0db-14ac7972a88f] Running
	I0802 17:28:55.384394   13693 system_pods.go:61] "tiller-deploy-6677d64bcd-t67mn" [a61d96f6-f02c-4320-a0ef-8562603e4751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0802 17:28:55.384403   13693 system_pods.go:74] duration metric: took 35.320792ms to wait for pod list to return data ...
	I0802 17:28:55.384413   13693 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:28:55.409155   13693 default_sa.go:45] found service account: "default"
	I0802 17:28:55.409179   13693 default_sa.go:55] duration metric: took 24.760351ms for default service account to be created ...
	I0802 17:28:55.409189   13693 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:28:55.425516   13693 system_pods.go:86] 19 kube-system pods found
	I0802 17:28:55.425547   13693 system_pods.go:89] "coredns-7db6d8ff4d-p76fq" [670e26de-e1a8-40ee-acf4-c6d4ce7b4d93] Running
	I0802 17:28:55.425552   13693 system_pods.go:89] "coredns-7db6d8ff4d-sk9vd" [f3173627-759d-4a33-bb57-808ee415d0c5] Running
	I0802 17:28:55.425559   13693 system_pods.go:89] "csi-hostpath-attacher-0" [227e1c3a-6e8d-4f98-b792-283449039f73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0802 17:28:55.425564   13693 system_pods.go:89] "csi-hostpath-resizer-0" [c6d4e68a-d483-4117-a6fd-d0a19698bb11] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0802 17:28:55.425573   13693 system_pods.go:89] "csi-hostpathplugin-f6h9n" [07a6f05c-29ec-4f7d-a29e-9e9eae21e2b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0802 17:28:55.425578   13693 system_pods.go:89] "etcd-addons-892214" [1e9d0def-524d-43ca-b29c-2e1e66d2d47b] Running
	I0802 17:28:55.425586   13693 system_pods.go:89] "kube-apiserver-addons-892214" [731cdc04-a76d-4875-a043-754d4bfcd0f9] Running
	I0802 17:28:55.425591   13693 system_pods.go:89] "kube-controller-manager-addons-892214" [363f4529-972c-4645-b8e2-843e479d5b37] Running
	I0802 17:28:55.425596   13693 system_pods.go:89] "kube-ingress-dns-minikube" [ee00722b-6b3b-4626-b856-87ffccf9d0d2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0802 17:28:55.425600   13693 system_pods.go:89] "kube-proxy-54c9t" [cd068d1d-f377-4c1f-b13b-45c1df8b4eb2] Running
	I0802 17:28:55.425604   13693 system_pods.go:89] "kube-scheduler-addons-892214" [4f9abb24-eb93-4fe9-9de4-929eb510eed3] Running
	I0802 17:28:55.425612   13693 system_pods.go:89] "metrics-server-c59844bb4-smv7j" [8ea8885b-a830-4d58-80b8-a67cc4f26748] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 17:28:55.425618   13693 system_pods.go:89] "nvidia-device-plugin-daemonset-7hdnl" [6af5e808-ef75-4f5b-8567-c08fc5f82515] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0802 17:28:55.425623   13693 system_pods.go:89] "registry-698f998955-cs8q7" [7d2c31bd-4360-46bd-82c0-b2258ba69944] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0802 17:28:55.425629   13693 system_pods.go:89] "registry-proxy-ntww4" [59de3da3-a31c-480b-8715-6dcecc3c01e6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0802 17:28:55.425638   13693 system_pods.go:89] "snapshot-controller-745499f584-rzz47" [3db6259f-c6b3-4922-a452-a354b7ef788e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.425644   13693 system_pods.go:89] "snapshot-controller-745499f584-tnv6t" [394a2f4c-f536-4fb7-b476-2d8febddc5b8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0802 17:28:55.425651   13693 system_pods.go:89] "storage-provisioner" [f4df5f76-bb9c-40a3-b0db-14ac7972a88f] Running
	I0802 17:28:55.425656   13693 system_pods.go:89] "tiller-deploy-6677d64bcd-t67mn" [a61d96f6-f02c-4320-a0ef-8562603e4751] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0802 17:28:55.425670   13693 system_pods.go:126] duration metric: took 16.471119ms to wait for k8s-apps to be running ...
	I0802 17:28:55.425683   13693 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:28:55.425726   13693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
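The kubelet probe on the line above is a single systemctl call whose exit status decides whether the unit is active. A minimal sketch of the same check, run locally here whereas the log shows it going through the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// isKubeletActive mirrors "sudo systemctl is-active --quiet service kubelet":
// with --quiet, systemctl prints nothing and exits 0 only when the unit is active.
func isKubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", isKubeletActive())
}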
	I0802 17:28:55.430022   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0802 17:28:55.430043   13693 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0802 17:28:55.454076   13693 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:28:55.454096   13693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0802 17:28:55.543160   13693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0802 17:28:55.783999   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:55.784959   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:55.853093   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.217709   13693 system_svc.go:56] duration metric: took 792.015759ms WaitForService to wait for kubelet
	I0802 17:28:56.217741   13693 kubeadm.go:582] duration metric: took 10.10388483s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:28:56.217766   13693 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:28:56.217874   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.805726082s)
	I0802 17:28:56.217926   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:56.217944   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:56.218167   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:56.218180   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:56.218188   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:56.218194   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:56.218434   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:56.218456   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:56.218440   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:56.220579   13693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:28:56.220598   13693 node_conditions.go:123] node cpu capacity is 2
	I0802 17:28:56.220609   13693 node_conditions.go:105] duration metric: took 2.838368ms to run NodePressure ...
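The node_conditions entries above (ephemeral-storage capacity 17734596Ki, cpu capacity 2, and the NodePressure pass) correspond to reading node capacity and pressure conditions from the API. A client-go sketch of that read, with the kubeconfig path taken from the log and otherwise illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig location as seen in the log; adjust for other environments.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Report raw capacity, then flag any pressure condition reported True.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			pressure := c.Type == corev1.NodeMemoryPressure ||
				c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure
			if pressure && c.Status == corev1.ConditionTrue {
				fmt.Printf("  %s reported True\n", c.Type)
			}
		}
	}
}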
	I0802 17:28:56.220622   13693 start.go:241] waiting for startup goroutines ...
	I0802 17:28:56.284744   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.285109   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.352062   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:56.861520   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:56.868169   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:56.914804   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.018522   13693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.475322379s)
	I0802 17:28:57.018597   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:57.018614   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:57.018884   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:57.018948   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:57.018966   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:57.018978   13693 main.go:141] libmachine: Making call to close driver server
	I0802 17:28:57.018990   13693 main.go:141] libmachine: (addons-892214) Calling .Close
	I0802 17:28:57.019244   13693 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:28:57.019261   13693 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:28:57.019246   13693 main.go:141] libmachine: (addons-892214) DBG | Closing plugin on server side
	I0802 17:28:57.020765   13693 addons.go:475] Verifying addon gcp-auth=true in "addons-892214"
	I0802 17:28:57.022295   13693 out.go:177] * Verifying gcp-auth addon...
	I0802 17:28:57.024175   13693 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0802 17:28:57.040965   13693 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0802 17:28:57.040984   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
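Each "waiting for pod ..., current state: Pending" line that fills the remainder of this log is one iteration of a poll over a label selector. A minimal client-go sketch of such a loop, assuming the gcp-auth namespace and selector from the log; timeout handling in the real helper is reduced here to a single context deadline:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matches the selector and every
// matching pod reports phase Running.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				break
			}
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForLabel(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"); err != nil {
		fmt.Println("wait failed:", err)
	}
}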
	I0802 17:28:57.284003   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.285413   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.352566   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:57.529875   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:57.828315   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:57.828473   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:57.855603   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.030083   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:58.283963   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.284278   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.351277   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:58.527323   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:58.784793   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:58.784928   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:58.851288   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.028216   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:59.284931   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.284984   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.351326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:28:59.527766   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:28:59.785454   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:28:59.785816   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:28:59.858184   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.028180   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:00.283845   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.284391   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.351972   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:00.528563   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:00.787357   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:00.787674   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:00.850721   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.027449   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:01.283605   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.283857   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.352853   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:01.529114   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:01.784456   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:01.785132   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:01.854608   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.029118   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:02.285036   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.285282   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.353966   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:02.527470   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:02.784032   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:02.785170   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:02.851508   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.027985   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:03.282779   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.283037   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.351165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:03.527535   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:03.783694   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:03.783978   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:03.851205   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.027973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:04.283363   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.283781   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.351973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:04.527799   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:04.782743   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:04.785082   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:04.851888   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.028125   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:05.283673   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.283741   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.352174   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:05.528410   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:05.793272   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:05.793290   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:05.851567   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.028729   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:06.283622   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.284031   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.352769   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:06.528437   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:06.784417   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:06.784704   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:06.851636   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.028003   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:07.283389   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.284837   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.351526   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:07.527764   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:07.783791   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:07.788226   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:07.851389   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.028130   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:08.283634   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.284369   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.350879   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:08.527425   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:08.784139   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:08.784429   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:08.851024   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.027525   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:09.284239   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.284305   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.350531   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:09.528886   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:09.783671   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:09.785079   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:09.851258   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.028205   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:10.285810   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.285883   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.353499   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:10.527336   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:10.783960   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:10.784914   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:10.851187   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.027970   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:11.284561   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.286326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.352860   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:11.531821   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:11.786129   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:11.786685   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:11.851210   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.028197   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:12.283903   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.284683   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.351693   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:12.528110   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:12.782942   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:12.783498   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:12.851220   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.027573   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:13.286152   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.287144   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.351558   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:13.528117   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:13.784226   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:13.784669   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:13.850695   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.028937   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:14.283758   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.284809   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.350935   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:14.527569   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:14.784772   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:14.785412   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:14.851815   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.028323   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:15.287890   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.288012   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.352522   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:15.527838   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:15.799529   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:15.800420   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:15.854119   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.028043   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:16.283744   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.283881   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.351792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:16.527916   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:16.783401   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:16.784277   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:16.850906   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.027291   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:17.285109   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.285351   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.351823   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:17.528022   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:17.784654   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:17.785372   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:17.851867   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.329103   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:18.329904   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.330465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.351059   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:18.528110   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:18.783693   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:18.783872   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:18.851665   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.028013   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:19.283974   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.284013   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.353821   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:19.530285   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:19.783724   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:19.784554   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:19.851079   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.027714   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:20.283725   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.283842   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.350755   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:20.529235   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:20.785042   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:20.785556   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:20.851784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.028342   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:21.282891   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.283412   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.350698   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:21.527507   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:21.783750   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:21.784591   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:21.851461   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.027792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:22.283684   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.284596   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.352095   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:22.534331   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:22.783884   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:22.784084   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:22.851481   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.027950   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:23.285051   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.285070   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.352894   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:23.527899   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:23.785263   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:23.785561   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:23.851175   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.029469   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:24.286740   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.286863   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.352851   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:24.527308   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:24.784679   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:24.784826   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:24.851521   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.029726   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:25.283525   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.284475   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.352280   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:25.528135   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:25.782385   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:25.783554   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:25.850944   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.027567   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:26.285373   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.287058   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.351612   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:26.527769   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:26.784301   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:26.785530   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:26.852865   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.028581   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:27.288328   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.289507   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.350513   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:27.527823   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:27.784433   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:27.784984   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:27.851575   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.027985   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:28.284865   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.284973   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.351260   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:28.527928   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:28.783702   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:28.783860   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:28.851424   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.027983   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:29.285017   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.285428   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.351721   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:29.528487   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:29.787711   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:29.789207   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:29.856303   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.027466   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:30.284990   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.285395   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.351790   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:30.527980   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:30.784093   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:30.784403   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:30.852085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.028253   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:31.283764   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.284863   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.351621   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:31.533122   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:31.785884   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:31.787012   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:31.850544   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.027802   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:32.302724   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.303120   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.352034   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:32.528773   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:32.784949   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:32.785385   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:32.867342   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.027973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:33.286986   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.288384   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.351447   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:33.528049   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:33.783800   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:33.783942   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:33.851536   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.028098   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:34.285222   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.285569   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.352266   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:34.528494   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:34.784581   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:34.784938   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:34.851658   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.028965   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:35.285635   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.288119   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.351496   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:35.529243   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:35.783734   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:35.783906   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:35.934983   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.027443   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:36.284573   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.284883   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.350940   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:36.527134   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:36.784614   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:36.784626   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:36.851705   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.027748   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:37.284267   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:37.285187   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:37.351442   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:37.527493   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:37.783982   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:37.785179   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:37.852710   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.029314   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:38.283730   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:38.283839   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.350947   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:38.527522   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:38.784667   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0802 17:29:38.785977   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:38.851165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.028876   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:39.283502   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:39.284535   13693 kapi.go:107] duration metric: took 45.005812272s to wait for kubernetes.io/minikube-addons=registry ...
	I0802 17:29:39.351275   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:39.527957   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:39.783806   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:39.850673   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.028776   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:40.283413   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.351800   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:40.528209   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:40.783142   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:40.851171   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.028688   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:41.282946   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:41.351236   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:41.528135   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:41.783035   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:41.851293   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.027926   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:42.283138   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.356165   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:42.602922   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:42.784076   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:42.851620   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.027804   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:43.284023   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.351013   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:43.528042   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:43.783426   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:43.851629   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.028337   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:44.282652   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:44.350657   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:44.527961   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.056080   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.056541   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.056942   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.282862   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.351629   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:45.527634   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:45.783432   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:45.851272   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.028312   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:46.282702   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.350973   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:46.527435   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:46.783183   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:46.852930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.028156   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:47.370988   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.373916   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:47.529509   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:47.783859   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:47.851457   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.028115   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:48.285916   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:48.351928   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:48.527331   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:48.783254   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:48.851244   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.028529   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:49.283835   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.353408   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:49.528064   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:49.782552   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:49.850278   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.027784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:50.283771   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.352536   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:50.528362   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:50.783258   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:50.851453   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.028387   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:51.284859   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:51.351010   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:51.527304   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:51.783325   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:51.851305   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.028689   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:52.283638   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:52.350745   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:52.527814   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.033329   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.034413   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.034463   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.283905   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.351349   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:53.527381   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:53.783475   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:53.851784   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.028127   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:54.282613   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.350997   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:54.527875   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:54.785624   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:54.850832   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.033497   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:55.293610   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.350689   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:55.527938   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:55.784032   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:55.853182   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.027191   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:56.284221   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.353828   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:56.527749   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:56.782777   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:56.851207   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.031574   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:57.284499   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.352397   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:57.527872   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:57.800114   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:57.850852   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.028218   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:58.283011   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.351087   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:58.527530   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:58.783528   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:58.850662   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.028365   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:59.282952   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.351085   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:29:59.528454   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:29:59.786595   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:29:59.856200   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.028300   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:00.283271   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.350923   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:00.528374   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:00.783005   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:00.851386   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.028246   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:01.283628   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.350499   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:01.528441   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:01.790738   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:01.858792   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.320083   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:02.320826   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.350930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:02.527930   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:02.784401   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:02.851472   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.027166   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:03.282683   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.356326   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:03.527399   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:03.782970   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:03.851394   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.027732   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:04.283616   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.350400   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:04.528511   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:04.966333   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:04.968645   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.028324   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:05.284073   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.351623   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:05.527780   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:05.784018   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:05.851283   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.028209   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:06.284304   13693 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0802 17:30:06.351082   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:06.528943   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:06.786232   13693 kapi.go:107] duration metric: took 1m12.507374779s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0802 17:30:06.853268   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.027259   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:07.350996   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:07.538067   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.162654   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.164327   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:08.351582   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:08.528458   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:08.851465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.027967   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:09.350551   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:09.528028   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:09.850791   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.032575   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:10.351443   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:10.528139   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:10.851218   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:11.027960   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:11.351306   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:11.527593   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:11.852063   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:12.028395   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:12.352640   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:12.527465   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0802 17:30:12.853283   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:13.028142   13693 kapi.go:107] duration metric: took 1m16.003960207s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0802 17:30:13.030072   13693 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-892214 cluster.
	I0802 17:30:13.031557   13693 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0802 17:30:13.032954   13693 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0802 17:30:13.376702   13693 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0802 17:30:13.850864   13693 kapi.go:107] duration metric: took 1m18.504968965s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0802 17:30:13.852881   13693 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0802 17:30:13.854184   13693 addons.go:510] duration metric: took 1m27.740310331s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns helm-tiller inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0802 17:30:13.854224   13693 start.go:246] waiting for cluster config update ...
	I0802 17:30:13.854249   13693 start.go:255] writing updated cluster config ...
	I0802 17:30:13.854517   13693 ssh_runner.go:195] Run: rm -f paused
	I0802 17:30:13.902530   13693 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 17:30:13.904477   13693 out.go:177] * Done! kubectl is now configured to use "addons-892214" cluster and "default" namespace by default
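The gcp-auth messages above name the `gcp-auth-skip-secret` label as the opt-out for credential mounting. Below is a minimal client-go sketch of creating a pod that carries that label; it is illustrative only — the pod name, namespace, image command, label value "true", and the default kubeconfig location are assumptions, not taken from this run.

// Minimal sketch: create a pod carrying the gcp-auth-skip-secret label referenced
// in the gcp-auth output above, so the webhook leaves it without mounted credentials.
// Assumptions: client-go is available, kubeconfig is at the default ~/.kube/config,
// and the pod name, command, and label value "true" are illustrative only.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "skip-gcp-auth-demo", // illustrative name
			Namespace: "default",
			// Label key taken from the log message; the value "true" is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox", // image already present in this run
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := client.CoreV1().Pods(pod.Namespace).Create(context.Background(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.Name)
}

Describing the created pod afterwards should show no injected credential mount, unlike unlabeled pods in the same cluster.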
	
	
	==> CRI-O <==
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.546112129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eaab1ff4-cbfe-4ecd-af45-cb30f5310ca8 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.548313486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cf65e28-d1e7-4b58-8f45-e8ffbd7e72bc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.551266076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620209551232387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cf65e28-d1e7-4b58-8f45-e8ffbd7e72bc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.552307977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c63ca484-1a21-4fe0-b426-626c4480b7b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.552529438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c63ca484-1a21-4fe0-b426-626c4480b7b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.553207587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c63ca484-1a21-4fe0-b426-626c4480b7b8 name=/runtime.v1.RuntimeService/ListContainers
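The CRI-O debug entries above trace the CRI gRPC calls (/runtime.v1.RuntimeService/Version, ImageFsInfo, ListContainers) and their full responses. A minimal Go sketch of issuing the same Version and ListContainers calls against the CRI-O socket is below; the socket path /var/run/crio/crio.sock is the usual CRI-O default and, like the 5-second timeout, is an assumption rather than something taken from this report.

// Minimal sketch: query the same CRI endpoints shown in the CRI-O debug log above.
// Assumptions: k8s.io/cri-api and google.golang.org/grpc are available and CRI-O
// listens on its default socket, /var/run/crio/crio.sock.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The unix:// target scheme lets gRPC dial the local CRI-O socket directly.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as "/runtime.v1.RuntimeService/Version" in the log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// Same RPC as "/runtime.v1.RuntimeService/ListContainers" with no filter,
	// which is why the log notes "No filters were applied".
	containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}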
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.594964135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ed0597e-2a83-489d-bab0-ff03aa553f43 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.595041729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ed0597e-2a83-489d-bab0-ff03aa553f43 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.596482858Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbc6adbb-4d57-438f-8bc4-83f9c400cf10 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.597766219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620209597728208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbc6adbb-4d57-438f-8bc4-83f9c400cf10 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.598198963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51ca11b3-aa49-48ce-a022-8d9399bbde0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.598260458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51ca11b3-aa49-48ce-a022-8d9399bbde0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.598739931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51ca11b3-aa49-48ce-a022-8d9399bbde0d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.616909259Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acb26d06-ec5a-47aa-b772-5c95058ab010 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.617232343Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&PodSandboxMetadata{Name:hello-world-app-6778b5fc9f-m5mgj,Uid:9deaf7ef-f897-44a9-a367-3c6c60bb68fc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620004366222237,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,pod-template-hash: 6778b5fc9f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:33:24.054279647Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&PodSandboxMetadata{Name:nginx,Uid:fc59e354-2e50-4658-9768-c1a886aff1aa,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1722619861583512620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:31:01.273732014Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&PodSandboxMetadata{Name:busybox,Uid:da3a730e-656e-41dc-9be1-768d8d360cb8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619814518944737,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9be1-768d8d360cb8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:30:14.196237718Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:edb501abf8809a4819
c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-8d985888d-4ghlh,Uid:c9f16559-1e63-465e-8e8e-47fcf6b7535d,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619732236747247,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,pod-template-hash: 8d985888d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:28:51.610523488Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&PodSandboxMetadata{Name:metrics-server-c59844bb4-smv7j,Uid:8ea8885b-a830-4d58-80b8-a67cc4f26748,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619731962079425,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes
.pod.name: metrics-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,k8s-app: metrics-server,pod-template-hash: c59844bb4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:28:51.349165490Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f4df5f76-bb9c-40a3-b0db-14ac7972a88f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619731807220093,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"ann
otations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-02T17:28:51.195341735Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sk9vd,Uid:f3173627-759d-4a33-bb57-808ee415d0c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619726582417469,Labels:map[string]string{io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:28:46.270092489Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&PodSandboxMetadata{Name:kube-proxy-54c9t,Uid:cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619726479981786,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:28:45.550895496Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-892214,Uid:3d001318cb520bd66242c1c022a2feb0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619706152358033,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3d001318cb520bd66242c1c022a2feb0,kubernetes.io/config.seen: 2024-08-02T17:28:25.689549824Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-892214,Uid:cc4102938a41902a715ba0b7b11dc9f6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619706144965
694,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: cc4102938a41902a715ba0b7b11dc9f6,kubernetes.io/config.seen: 2024-08-02T17:28:25.689548857Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-892214,Uid:51e815c1f6abe53ec260e4ea81309e6e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619706143015927,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6ab
e53ec260e4ea81309e6e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 51e815c1f6abe53ec260e4ea81309e6e,kubernetes.io/config.seen: 2024-08-02T17:28:25.689550661Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&PodSandboxMetadata{Name:etcd-addons-892214,Uid:bb9bd0649440d8a7a0d3586b7c1ed3f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722619706142290012,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: bb9bd0649440d8a7a0d3586b7c1ed3f8,kubernetes.io/config.seen: 2024-08-02T17:28:25.689545413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file=
"otel-collector/interceptors.go:74" id=acb26d06-ec5a-47aa-b772-5c95058ab010 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.617890088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d77457ac-3dd8-43ef-bfa1-b0810fc6219f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.617974799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d77457ac-3dd8-43ef-bfa1-b0810fc6219f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.620042649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d77457ac-3dd8-43ef-bfa1-b0810fc6219f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.645463759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=807f4596-7c3d-425e-9a53-c46f4325eac2 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.645538338Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=807f4596-7c3d-425e-9a53-c46f4325eac2 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.646938370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe0d1361-cab6-40cd-8903-0d90601b314e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.648248793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722620209648223199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe0d1361-cab6-40cd-8903-0d90601b314e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.648965867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc45fceb-07cd-45cd-a413-8e754b94306b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.649076140Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc45fceb-07cd-45cd-a413-8e754b94306b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:36:49 addons-892214 crio[687]: time="2024-08-02 17:36:49.649378482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e10b59d4eecb74abe0df5a061564fe8277241c52c45154450138bd4e44fa831,PodSandboxId:a2836adc9b342a37c7d517a440c55235d1eec6a9596d97f5d5efd302cadcac50,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722620006890719981,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-m5mgj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9deaf7ef-f897-44a9-a367-3c6c60bb68fc,},Annotations:map[string]string{io.kubernetes.container.hash: d18b7951,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37a277059d0f1883c326b05f898cd9893b36e490b43f833dc365fad150063640,PodSandboxId:9754415aeff5d053a49895b7801508b7d3317e01472faa321572fe1143554b06,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722619865460114754,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc59e354-2e50-4658-9768-c1a886aff1aa,},Annotations:map[string]string{io.kubernet
es.container.hash: d36b3d16,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6adf8237f42cef5716242c66f6c3714a887a31742ac8235490a44eaa5341302,PodSandboxId:50ae3db6a32299f52784de81a8c2562b2f13665e9a59fbb2630e891989413348,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722619816993617258,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: da3a730e-656e-41dc-9
be1-768d8d360cb8,},Annotations:map[string]string{io.kubernetes.container.hash: 70f337e3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebc277ab8447cd0197fdb7efbfbb840a7d05fd1175186bf8c40541a8c73cdbd2,PodSandboxId:edb501abf8809a4819c9f3ebf7a1c885c001d8f0ab150e849cb08b7859e73d8c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1722619780573079093,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-4ghlh,io.kubernetes.pod.names
pace: local-path-storage,io.kubernetes.pod.uid: c9f16559-1e63-465e-8e8e-47fcf6b7535d,},Annotations:map[string]string{io.kubernetes.container.hash: c3a47647,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab25730f9ac6e2cc5fe0f1219e2ff23d173087436602c41f016cfeaa21cfa230,PodSandboxId:5a6106fe3cb999308d278a901379e243d5852e5909e5f2e8e6168dc4265cf702,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722619771349736858,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metri
cs-server-c59844bb4-smv7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea8885b-a830-4d58-80b8-a67cc4f26748,},Annotations:map[string]string{io.kubernetes.container.hash: 6d64523a,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649,PodSandboxId:9ab9eff506f73aa947624accb9694fb47ae9410e5729aebf45e3faa29b51586a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722619732568873183,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4df5f76-bb9c-40a3-b0db-14ac7972a88f,},Annotations:map[string]string{io.kubernetes.container.hash: d5774e50,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3,PodSandboxId:5863a810d3cae8dd863a9f250648cc94ea09bc0a8eb155d95087dbe4c87dbba0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722619729060551528,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sk9vd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3173627-759d-4a33-bb57-808ee415d0c5,},Annotations:map[string]string{io.kubernetes.container.hash: e6e4551,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c,PodSandboxId:6c53e375d81c8365f0d0e5e0048683abafc5e6ec01726d01e54bb317a2dd657e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722619727093009673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54c9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd068d1d-f377-4c1f-b13b-45c1df8b4eb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6701c72a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9,PodSandboxId:2c551c1f8ac4561b470c5c4b2412d4ea119df5dc2ff769c40410eb1217c5ce87,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722619706380858141,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9bd0649440d8a7a0d3586b7c1ed3f8,},Annotations:map[string]string{io.kubernetes.container.hash: b2751d44,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6,PodSandboxId:a5207cbdabdb7aa1b7356b4a13a80d0b0557878f3ae8010b053dbea7cf39fede,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09
caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722619706309891757,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e815c1f6abe53ec260e4ea81309e6e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd,PodSandboxId:da6052a1b07a528db074b430065d432725a64212f473be92902437c0195dfaff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466d
d273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722619706331682857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d001318cb520bd66242c1c022a2feb0,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca,PodSandboxId:40c82970196e3b3fa0f8740a4d529d9b150c277f8322341b3a4ff1ed295cf89d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4
c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722619706296440181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-892214,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc4102938a41902a715ba0b7b11dc9f6,},Annotations:map[string]string{io.kubernetes.container.hash: f045d02b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc45fceb-07cd-45cd-a413-8e754b94306b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e10b59d4eecb       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   a2836adc9b342       hello-world-app-6778b5fc9f-m5mgj
	37a277059d0f1       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   9754415aeff5d       nginx
	b6adf8237f42c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   50ae3db6a3229       busybox
	ebc277ab8447c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   edb501abf8809       local-path-provisioner-8d985888d-4ghlh
	ab25730f9ac6e       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   5a6106fe3cb99       metrics-server-c59844bb4-smv7j
	01ce9fb7b9bc9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   9ab9eff506f73       storage-provisioner
	4dd173b9de652       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   5863a810d3cae       coredns-7db6d8ff4d-sk9vd
	d54f0d9c8ff29       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   6c53e375d81c8       kube-proxy-54c9t
	ad1f1d7d140c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   2c551c1f8ac45       etcd-addons-892214
	68b777f9568ba       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        8 minutes ago       Running             kube-controller-manager   0                   da6052a1b07a5       kube-controller-manager-addons-892214
	ce9b12187ebce       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        8 minutes ago       Running             kube-scheduler            0                   a5207cbdabdb7       kube-scheduler-addons-892214
	607e5b4ce630c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        8 minutes ago       Running             kube-apiserver            0                   40c82970196e3       kube-apiserver-addons-892214
	
	
	==> coredns [4dd173b9de65233df543ccbe56ec279179c4d707ab3764872d0bbf1188995bd3] <==
	[INFO] 10.244.0.7:57689 - 63838 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000590055s
	[INFO] 10.244.0.7:46225 - 55472 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000168109s
	[INFO] 10.244.0.7:46225 - 14770 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000259057s
	[INFO] 10.244.0.7:38115 - 17924 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058774s
	[INFO] 10.244.0.7:38115 - 49210 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103759s
	[INFO] 10.244.0.7:44593 - 4935 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087892s
	[INFO] 10.244.0.7:44593 - 15937 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116761s
	[INFO] 10.244.0.7:51451 - 15065 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000116559s
	[INFO] 10.244.0.7:51451 - 3525 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00020012s
	[INFO] 10.244.0.7:51704 - 29619 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114502s
	[INFO] 10.244.0.7:51704 - 4017 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000205747s
	[INFO] 10.244.0.7:56800 - 26563 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090366s
	[INFO] 10.244.0.7:56800 - 17613 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000044849s
	[INFO] 10.244.0.7:57131 - 63757 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053783s
	[INFO] 10.244.0.7:57131 - 39947 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097075s
	[INFO] 10.244.0.22:36936 - 63718 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00050466s
	[INFO] 10.244.0.22:43964 - 21398 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152662s
	[INFO] 10.244.0.22:39439 - 29347 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013128s
	[INFO] 10.244.0.22:40314 - 41123 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101775s
	[INFO] 10.244.0.22:45232 - 37518 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092809s
	[INFO] 10.244.0.22:57465 - 56830 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091685s
	[INFO] 10.244.0.22:43500 - 39329 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000364671s
	[INFO] 10.244.0.22:38717 - 14406 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000755682s
	[INFO] 10.244.0.26:46232 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000457151s
	[INFO] 10.244.0.26:50015 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104755s
	
	
	==> describe nodes <==
	Name:               addons-892214
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-892214
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=addons-892214
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_28_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-892214
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:28:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-892214
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:36:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:33:37 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:33:37 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:33:37 +0000   Fri, 02 Aug 2024 17:28:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:33:37 +0000   Fri, 02 Aug 2024 17:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    addons-892214
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 64bd5dd688f344e499a7dc3b368671c8
	  System UUID:                64bd5dd6-88f3-44e4-99a7-dc3b368671c8
	  Boot ID:                    88934d9c-d3a5-495c-b37a-7f71b825103a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  default                     hello-world-app-6778b5fc9f-m5mgj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 coredns-7db6d8ff4d-sk9vd                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m3s
	  kube-system                 etcd-addons-892214                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m18s
	  kube-system                 kube-apiserver-addons-892214              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-controller-manager-addons-892214     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-54c9t                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-scheduler-addons-892214              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 metrics-server-c59844bb4-smv7j            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m58s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  local-path-storage          local-path-provisioner-8d985888d-4ghlh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m1s   kube-proxy       
	  Normal  Starting                 8m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m18s  kubelet          Node addons-892214 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m18s  kubelet          Node addons-892214 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m18s  kubelet          Node addons-892214 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m17s  kubelet          Node addons-892214 status is now: NodeReady
	  Normal  RegisteredNode           8m4s   node-controller  Node addons-892214 event: Registered Node addons-892214 in Controller
	
	
	==> dmesg <==
	[  +5.077315] kauditd_printk_skb: 125 callbacks suppressed
	[  +5.007679] kauditd_printk_skb: 132 callbacks suppressed
	[Aug 2 17:29] kauditd_printk_skb: 66 callbacks suppressed
	[ +24.299050] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.184468] kauditd_printk_skb: 32 callbacks suppressed
	[ +19.423020] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.040766] kauditd_printk_skb: 45 callbacks suppressed
	[Aug 2 17:30] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.256749] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.579427] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.204111] kauditd_printk_skb: 48 callbacks suppressed
	[ +24.964049] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.456287] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.927033] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.645032] kauditd_printk_skb: 43 callbacks suppressed
	[Aug 2 17:31] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.836290] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.034263] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.707953] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.818538] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.331447] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.430817] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.658139] kauditd_printk_skb: 30 callbacks suppressed
	[Aug 2 17:33] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.268032] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ad1f1d7d140c75d74d0c58613bbbc1f088e6e00803d0fa2bbc4c4327b5aca2f9] <==
	{"level":"info","ts":"2024-08-02T17:30:08.142139Z","caller":"traceutil/trace.go:171","msg":"trace[1369620422] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"451.900659ms","start":"2024-08-02T17:30:07.690224Z","end":"2024-08-02T17:30:08.142125Z","steps":["trace[1369620422] 'process raft request'  (duration: 451.797358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.142293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.690205Z","time spent":"452.029573ms","remote":"127.0.0.1:38152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1111 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-08-02T17:30:08.142681Z","caller":"traceutil/trace.go:171","msg":"trace[35641469] linearizableReadLoop","detail":"{readStateIndex:1166; appliedIndex:1166; }","duration":"311.299357ms","start":"2024-08-02T17:30:07.831372Z","end":"2024-08-02T17:30:08.142672Z","steps":["trace[35641469] 'read index received'  (duration: 311.296218ms)","trace[35641469] 'applied index is now lower than readState.Index'  (duration: 2.487µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:30:08.142755Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"311.374229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T17:30:08.142829Z","caller":"traceutil/trace.go:171","msg":"trace[398693944] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1131; }","duration":"311.474427ms","start":"2024-08-02T17:30:07.831349Z","end":"2024-08-02T17:30:08.142823Z","steps":["trace[398693944] 'agreement among raft nodes before linearized reading'  (duration: 311.379284ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.142852Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.831329Z","time spent":"311.517092ms","remote":"127.0.0.1:38152","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":27,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
	{"level":"warn","ts":"2024-08-02T17:30:08.144809Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.954433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85513"}
	{"level":"info","ts":"2024-08-02T17:30:08.145141Z","caller":"traceutil/trace.go:171","msg":"trace[141671573] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1132; }","duration":"309.308888ms","start":"2024-08-02T17:30:07.83582Z","end":"2024-08-02T17:30:08.145129Z","steps":["trace[141671573] 'agreement among raft nodes before linearized reading'  (duration: 308.830642ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:30:07.835807Z","time spent":"309.456452ms","remote":"127.0.0.1:48198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":85535,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-08-02T17:30:08.14548Z","caller":"traceutil/trace.go:171","msg":"trace[1912328206] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"276.675583ms","start":"2024-08-02T17:30:07.868793Z","end":"2024-08-02T17:30:08.145468Z","steps":["trace[1912328206] 'process raft request'  (duration: 275.705732ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145741Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.183993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11441"}
	{"level":"info","ts":"2024-08-02T17:30:08.146126Z","caller":"traceutil/trace.go:171","msg":"trace[1759529977] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1132; }","duration":"132.571052ms","start":"2024-08-02T17:30:08.013548Z","end":"2024-08-02T17:30:08.146119Z","steps":["trace[1759529977] 'agreement among raft nodes before linearized reading'  (duration: 132.095628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:08.145783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.5029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T17:30:08.146434Z","caller":"traceutil/trace.go:171","msg":"trace[2094260723] range","detail":"{range_begin:/registry/mutatingwebhookconfigurations/; range_end:/registry/mutatingwebhookconfigurations0; response_count:0; response_revision:1132; }","duration":"172.163278ms","start":"2024-08-02T17:30:07.974255Z","end":"2024-08-02T17:30:08.146419Z","steps":["trace[2094260723] 'agreement among raft nodes before linearized reading'  (duration: 171.506596ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:13.359928Z","caller":"traceutil/trace.go:171","msg":"trace[635625109] linearizableReadLoop","detail":"{readStateIndex:1196; appliedIndex:1195; }","duration":"128.815872ms","start":"2024-08-02T17:30:13.2311Z","end":"2024-08-02T17:30:13.359915Z","steps":["trace[635625109] 'read index received'  (duration: 128.692264ms)","trace[635625109] 'applied index is now lower than readState.Index'  (duration: 123.229µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T17:30:13.360202Z","caller":"traceutil/trace.go:171","msg":"trace[1245398561] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"205.519885ms","start":"2024-08-02T17:30:13.154669Z","end":"2024-08-02T17:30:13.360189Z","steps":["trace[1245398561] 'process raft request'  (duration: 205.161227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:30:13.360376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.260287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T17:30:13.3604Z","caller":"traceutil/trace.go:171","msg":"trace[1348319456] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:1160; }","duration":"129.318179ms","start":"2024-08-02T17:30:13.231076Z","end":"2024-08-02T17:30:13.360394Z","steps":["trace[1348319456] 'agreement among raft nodes before linearized reading'  (duration: 129.253109ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:30:43.523122Z","caller":"traceutil/trace.go:171","msg":"trace[332859129] transaction","detail":"{read_only:false; response_revision:1315; number_of_response:1; }","duration":"100.725812ms","start":"2024-08-02T17:30:43.422363Z","end":"2024-08-02T17:30:43.523089Z","steps":["trace[332859129] 'process raft request'  (duration: 100.639758ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T17:31:10.949934Z","caller":"traceutil/trace.go:171","msg":"trace[1870216115] linearizableReadLoop","detail":"{readStateIndex:1567; appliedIndex:1566; }","duration":"331.058036ms","start":"2024-08-02T17:31:10.618849Z","end":"2024-08-02T17:31:10.949907Z","steps":["trace[1870216115] 'read index received'  (duration: 330.896049ms)","trace[1870216115] 'applied index is now lower than readState.Index'  (duration: 161.24µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:31:10.950136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.239487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11390"}
	{"level":"info","ts":"2024-08-02T17:31:10.950172Z","caller":"traceutil/trace.go:171","msg":"trace[1742876273] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:1511; }","duration":"331.339504ms","start":"2024-08-02T17:31:10.618824Z","end":"2024-08-02T17:31:10.950164Z","steps":["trace[1742876273] 'agreement among raft nodes before linearized reading'  (duration: 331.175428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:31:10.950198Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:31:10.618812Z","time spent":"331.376995ms","remote":"127.0.0.1:48198","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11412,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-02T17:31:10.9503Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:31:10.541343Z","time spent":"408.9525ms","remote":"127.0.0.1:48032","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-08-02T17:32:24.365777Z","caller":"traceutil/trace.go:171","msg":"trace[2131580234] transaction","detail":"{read_only:false; response_revision:1902; number_of_response:1; }","duration":"207.028606ms","start":"2024-08-02T17:32:24.158704Z","end":"2024-08-02T17:32:24.365732Z","steps":["trace[2131580234] 'process raft request'  (duration: 206.902437ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:36:50 up 8 min,  0 users,  load average: 0.10, 0.57, 0.42
	Linux addons-892214 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [607e5b4ce630c7553e859a8a23cb6c2a4d2fe9022324b3c7504826789757a2ca] <==
	E0802 17:30:42.003611       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 17:30:42.021552       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0802 17:31:01.153890       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0802 17:31:01.322963       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.33.94"}
	I0802 17:31:02.954552       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0802 17:31:03.988682       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0802 17:31:16.881794       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.4:8443->10.244.0.30:38758: read: connection reset by peer
	I0802 17:31:19.696275       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0802 17:31:43.152522       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.227.161"}
	I0802 17:31:53.012768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.012854       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.042200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.042261       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.061845       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.062061       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.065365       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.065497       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0802 17:31:53.097062       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0802 17:31:53.097111       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0802 17:31:54.062738       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0802 17:31:54.098130       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0802 17:31:54.109028       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0802 17:33:24.250032       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.126.37"}
	E0802 17:33:26.208495       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [68b777f9568ba74a8b254bd1c9d44d99a014205335a86a1d6a1626662be88edd] <==
	W0802 17:34:23.994089       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:34:23.994134       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:34:30.063226       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:34:30.063390       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:34:52.355795       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:34:52.355846       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:00.727773       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:00.727928       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:16.324516       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:16.324609       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:20.019957       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:20.020131       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:34.884494       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:34.884697       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:48.403307       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:48.403371       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:35:59.571635       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:35:59.571703       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:36:02.488930       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:36:02.489039       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:36:11.213303       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:36:11.213466       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0802 17:36:27.786149       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0802 17:36:27.786314       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0802 17:36:48.689317       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="8.732µs"
	
	
	==> kube-proxy [d54f0d9c8ff29df8157867f63de207612ea99b2723567955d05135014303538c] <==
	I0802 17:28:47.963981       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:28:47.987118       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	I0802 17:28:48.064742       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:28:48.064800       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:28:48.064820       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:28:48.068312       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:28:48.068491       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:28:48.068502       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:28:48.069833       1 config.go:192] "Starting service config controller"
	I0802 17:28:48.069845       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:28:48.069868       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:28:48.069871       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:28:48.070325       1 config.go:319] "Starting node config controller"
	I0802 17:28:48.070331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:28:48.170811       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:28:48.170849       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:28:48.170868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ce9b12187ebce275afbbd1f90da2a34131379c6e1b57c0f0c6d6e5b7373a8ef6] <==
	W0802 17:28:29.014844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:28:29.014883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:28:29.014942       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 17:28:29.014967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 17:28:29.015014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.015036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:29.015167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 17:28:29.015211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 17:28:29.015506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.016663       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:29.016936       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 17:28:29.017781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 17:28:29.830785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 17:28:29.830898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 17:28:29.832734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 17:28:29.832795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 17:28:29.912526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:28:29.912687       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:28:29.932556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 17:28:29.932636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 17:28:30.094375       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 17:28:30.094506       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 17:28:30.194685       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:28:30.194748       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0802 17:28:32.702856       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 17:33:32 addons-892214 kubelet[1287]: I0802 17:33:32.146436    1287 scope.go:117] "RemoveContainer" containerID="dfb9a0173c90c8011d7113af56a38827c175cf0732ee7f3da88139b87044544d"
	Aug 02 17:33:32 addons-892214 kubelet[1287]: I0802 17:33:32.169036    1287 scope.go:117] "RemoveContainer" containerID="d21aa0be6fd5feacd6507d5eca90f471380137edab2d6ea54442791f2f79533e"
	Aug 02 17:33:40 addons-892214 kubelet[1287]: I0802 17:33:40.616244    1287 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:34:31 addons-892214 kubelet[1287]: E0802 17:34:31.632605    1287 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:34:31 addons-892214 kubelet[1287]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:34:31 addons-892214 kubelet[1287]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:34:31 addons-892214 kubelet[1287]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:34:31 addons-892214 kubelet[1287]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:34:54 addons-892214 kubelet[1287]: I0802 17:34:54.616350    1287 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:35:31 addons-892214 kubelet[1287]: E0802 17:35:31.633038    1287 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:35:31 addons-892214 kubelet[1287]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:35:31 addons-892214 kubelet[1287]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:35:31 addons-892214 kubelet[1287]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:35:31 addons-892214 kubelet[1287]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:36:11 addons-892214 kubelet[1287]: I0802 17:36:11.617342    1287 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 02 17:36:31 addons-892214 kubelet[1287]: E0802 17:36:31.633939    1287 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:36:31 addons-892214 kubelet[1287]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:36:31 addons-892214 kubelet[1287]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:36:31 addons-892214 kubelet[1287]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:36:31 addons-892214 kubelet[1287]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:36:48 addons-892214 kubelet[1287]: I0802 17:36:48.709814    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-m5mgj" podStartSLOduration=202.426867064 podStartE2EDuration="3m24.709781796s" podCreationTimestamp="2024-08-02 17:33:24 +0000 UTC" firstStartedPulling="2024-08-02 17:33:24.597210104 +0000 UTC m=+293.093516936" lastFinishedPulling="2024-08-02 17:33:26.880124824 +0000 UTC m=+295.376431668" observedRunningTime="2024-08-02 17:33:27.54484581 +0000 UTC m=+296.041152665" watchObservedRunningTime="2024-08-02 17:36:48.709781796 +0000 UTC m=+497.206088640"
	Aug 02 17:36:50 addons-892214 kubelet[1287]: I0802 17:36:50.117156    1287 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tsn5\" (UniqueName: \"kubernetes.io/projected/8ea8885b-a830-4d58-80b8-a67cc4f26748-kube-api-access-7tsn5\") pod \"8ea8885b-a830-4d58-80b8-a67cc4f26748\" (UID: \"8ea8885b-a830-4d58-80b8-a67cc4f26748\") "
	Aug 02 17:36:50 addons-892214 kubelet[1287]: I0802 17:36:50.117211    1287 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8ea8885b-a830-4d58-80b8-a67cc4f26748-tmp-dir\") pod \"8ea8885b-a830-4d58-80b8-a67cc4f26748\" (UID: \"8ea8885b-a830-4d58-80b8-a67cc4f26748\") "
	Aug 02 17:36:50 addons-892214 kubelet[1287]: I0802 17:36:50.117618    1287 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea8885b-a830-4d58-80b8-a67cc4f26748-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "8ea8885b-a830-4d58-80b8-a67cc4f26748" (UID: "8ea8885b-a830-4d58-80b8-a67cc4f26748"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 02 17:36:50 addons-892214 kubelet[1287]: I0802 17:36:50.120763    1287 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea8885b-a830-4d58-80b8-a67cc4f26748-kube-api-access-7tsn5" (OuterVolumeSpecName: "kube-api-access-7tsn5") pod "8ea8885b-a830-4d58-80b8-a67cc4f26748" (UID: "8ea8885b-a830-4d58-80b8-a67cc4f26748"). InnerVolumeSpecName "kube-api-access-7tsn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	
	
	==> storage-provisioner [01ce9fb7b9bc996c5be8b385a7517b8930e8b30ff0d5cabd81be015b26da9649] <==
	I0802 17:28:53.347445       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 17:28:53.545122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 17:28:53.545191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 17:28:53.694248       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 17:28:53.694757       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"125d3a2d-c910-47bf-b476-a112f54d5bfb", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc became leader
	I0802 17:28:53.694949       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc!
	I0802 17:28:53.796547       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-892214_fd75a1f7-8e7e-475d-80c0-f9f6b9f743bc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-892214 -n addons-892214
helpers_test.go:261: (dbg) Run:  kubectl --context addons-892214 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (366.40s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-892214
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-892214: exit status 82 (2m0.446751633s)

                                                
                                                
-- stdout --
	* Stopping node "addons-892214"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-892214" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-892214
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-892214: exit status 11 (21.500900692s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-892214" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-892214
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-892214: exit status 11 (6.143257543s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-892214" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-892214
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-892214: exit status 11 (6.143940818s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.4:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-892214" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.24s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 node stop m02 -v=7 --alsologtostderr
E0802 17:48:24.888542   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:49:05.849817   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.465090542s)

                                                
                                                
-- stdout --
	* Stopping node "ha-652395-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:48:10.154956   27452 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:48:10.155446   27452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:48:10.155502   27452 out.go:304] Setting ErrFile to fd 2...
	I0802 17:48:10.155520   27452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:48:10.155996   27452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:48:10.156611   27452 mustload.go:65] Loading cluster: ha-652395
	I0802 17:48:10.156958   27452 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:48:10.156978   27452 stop.go:39] StopHost: ha-652395-m02
	I0802 17:48:10.157302   27452 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:48:10.157347   27452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:48:10.172902   27452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I0802 17:48:10.173355   27452 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:48:10.174045   27452 main.go:141] libmachine: Using API Version  1
	I0802 17:48:10.174067   27452 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:48:10.174446   27452 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:48:10.176575   27452 out.go:177] * Stopping node "ha-652395-m02"  ...
	I0802 17:48:10.177729   27452 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 17:48:10.177764   27452 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:48:10.177982   27452 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 17:48:10.178005   27452 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:48:10.180556   27452 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:48:10.180958   27452 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:48:10.180987   27452 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:48:10.181130   27452 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:48:10.181290   27452 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:48:10.181408   27452 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:48:10.181531   27452 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:48:10.262098   27452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 17:48:10.317218   27452 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 17:48:10.372255   27452 main.go:141] libmachine: Stopping "ha-652395-m02"...
	I0802 17:48:10.372285   27452 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:48:10.373861   27452 main.go:141] libmachine: (ha-652395-m02) Calling .Stop
	I0802 17:48:10.377926   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 0/120
	I0802 17:48:11.379315   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 1/120
	I0802 17:48:12.380516   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 2/120
	I0802 17:48:13.381832   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 3/120
	I0802 17:48:14.382986   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 4/120
	I0802 17:48:15.385335   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 5/120
	I0802 17:48:16.387017   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 6/120
	I0802 17:48:17.388658   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 7/120
	I0802 17:48:18.390027   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 8/120
	I0802 17:48:19.391490   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 9/120
	I0802 17:48:20.392765   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 10/120
	I0802 17:48:21.394751   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 11/120
	I0802 17:48:22.396253   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 12/120
	I0802 17:48:23.397770   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 13/120
	I0802 17:48:24.399172   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 14/120
	I0802 17:48:25.401030   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 15/120
	I0802 17:48:26.402367   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 16/120
	I0802 17:48:27.403987   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 17/120
	I0802 17:48:28.405673   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 18/120
	I0802 17:48:29.406940   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 19/120
	I0802 17:48:30.409012   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 20/120
	I0802 17:48:31.410914   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 21/120
	I0802 17:48:32.412678   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 22/120
	I0802 17:48:33.414099   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 23/120
	I0802 17:48:34.415632   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 24/120
	I0802 17:48:35.417657   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 25/120
	I0802 17:48:36.419133   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 26/120
	I0802 17:48:37.420699   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 27/120
	I0802 17:48:38.422506   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 28/120
	I0802 17:48:39.424076   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 29/120
	I0802 17:48:40.426208   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 30/120
	I0802 17:48:41.427828   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 31/120
	I0802 17:48:42.429287   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 32/120
	I0802 17:48:43.430939   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 33/120
	I0802 17:48:44.433123   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 34/120
	I0802 17:48:45.435126   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 35/120
	I0802 17:48:46.437746   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 36/120
	I0802 17:48:47.440469   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 37/120
	I0802 17:48:48.441793   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 38/120
	I0802 17:48:49.443548   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 39/120
	I0802 17:48:50.445628   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 40/120
	I0802 17:48:51.446990   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 41/120
	I0802 17:48:52.448669   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 42/120
	I0802 17:48:53.450148   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 43/120
	I0802 17:48:54.451679   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 44/120
	I0802 17:48:55.453507   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 45/120
	I0802 17:48:56.454878   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 46/120
	I0802 17:48:57.457005   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 47/120
	I0802 17:48:58.458604   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 48/120
	I0802 17:48:59.459981   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 49/120
	I0802 17:49:00.461849   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 50/120
	I0802 17:49:01.463180   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 51/120
	I0802 17:49:02.464664   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 52/120
	I0802 17:49:03.466177   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 53/120
	I0802 17:49:04.467581   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 54/120
	I0802 17:49:05.469261   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 55/120
	I0802 17:49:06.470649   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 56/120
	I0802 17:49:07.472152   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 57/120
	I0802 17:49:08.473443   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 58/120
	I0802 17:49:09.475405   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 59/120
	I0802 17:49:10.477421   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 60/120
	I0802 17:49:11.478768   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 61/120
	I0802 17:49:12.480632   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 62/120
	I0802 17:49:13.482062   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 63/120
	I0802 17:49:14.483665   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 64/120
	I0802 17:49:15.485696   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 65/120
	I0802 17:49:16.487548   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 66/120
	I0802 17:49:17.489434   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 67/120
	I0802 17:49:18.491045   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 68/120
	I0802 17:49:19.493222   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 69/120
	I0802 17:49:20.494986   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 70/120
	I0802 17:49:21.497248   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 71/120
	I0802 17:49:22.498752   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 72/120
	I0802 17:49:23.500230   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 73/120
	I0802 17:49:24.501664   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 74/120
	I0802 17:49:25.503388   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 75/120
	I0802 17:49:26.505493   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 76/120
	I0802 17:49:27.507085   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 77/120
	I0802 17:49:28.508665   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 78/120
	I0802 17:49:29.510226   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 79/120
	I0802 17:49:30.512087   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 80/120
	I0802 17:49:31.514181   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 81/120
	I0802 17:49:32.515859   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 82/120
	I0802 17:49:33.517939   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 83/120
	I0802 17:49:34.519411   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 84/120
	I0802 17:49:35.521210   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 85/120
	I0802 17:49:36.522487   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 86/120
	I0802 17:49:37.523867   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 87/120
	I0802 17:49:38.525667   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 88/120
	I0802 17:49:39.526784   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 89/120
	I0802 17:49:40.529036   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 90/120
	I0802 17:49:41.530888   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 91/120
	I0802 17:49:42.532181   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 92/120
	I0802 17:49:43.533700   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 93/120
	I0802 17:49:44.535399   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 94/120
	I0802 17:49:45.537349   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 95/120
	I0802 17:49:46.538789   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 96/120
	I0802 17:49:47.540050   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 97/120
	I0802 17:49:48.541375   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 98/120
	I0802 17:49:49.542842   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 99/120
	I0802 17:49:50.545155   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 100/120
	I0802 17:49:51.547569   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 101/120
	I0802 17:49:52.549727   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 102/120
	I0802 17:49:53.551317   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 103/120
	I0802 17:49:54.552671   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 104/120
	I0802 17:49:55.554219   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 105/120
	I0802 17:49:56.555714   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 106/120
	I0802 17:49:57.557613   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 107/120
	I0802 17:49:58.559005   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 108/120
	I0802 17:49:59.560265   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 109/120
	I0802 17:50:00.562301   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 110/120
	I0802 17:50:01.564291   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 111/120
	I0802 17:50:02.565704   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 112/120
	I0802 17:50:03.567122   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 113/120
	I0802 17:50:04.568729   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 114/120
	I0802 17:50:05.570659   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 115/120
	I0802 17:50:06.572083   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 116/120
	I0802 17:50:07.573495   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 117/120
	I0802 17:50:08.575774   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 118/120
	I0802 17:50:09.577129   27452 main.go:141] libmachine: (ha-652395-m02) Waiting for machine to stop 119/120
	I0802 17:50:10.578223   27452 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 17:50:10.578431   27452 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-652395 node stop m02 -v=7 --alsologtostderr": exit status 30
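The stderr above shows the kvm2 driver polling the m02 VM roughly once per second and giving up after 120 attempts with the domain still reporting "Running", which is what surfaces as exit status 30 from `node stop`. The Go snippet below is only a minimal sketch of that polling pattern under the assumptions visible in the log (one-second interval, 120-attempt budget); the `getState` callback is a hypothetical stand-in for the libvirt state query, not minikube's actual driver code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls the VM state once per second, up to maxAttempts times,
// and fails if the machine never leaves the "Running" state.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil // the machine stopped within the budget
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a guest that never shuts down, as in the run above
	// (3 attempts here instead of the driver's 120).
	alwaysRunning := func() (string, error) { return "Running", nil }
	if err := waitForStop(alwaysRunning, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}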
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
E0802 17:50:14.261751   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:50:27.770203   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (19.044609636s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:50:10.627358   27867 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:50:10.627519   27867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:10.627530   27867 out.go:304] Setting ErrFile to fd 2...
	I0802 17:50:10.627535   27867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:10.627738   27867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:50:10.627986   27867 out.go:298] Setting JSON to false
	I0802 17:50:10.628011   27867 mustload.go:65] Loading cluster: ha-652395
	I0802 17:50:10.628117   27867 notify.go:220] Checking for updates...
	I0802 17:50:10.628481   27867 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:50:10.628499   27867 status.go:255] checking status of ha-652395 ...
	I0802 17:50:10.628921   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.628993   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.649023   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41901
	I0802 17:50:10.649567   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.650129   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.650146   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.650555   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.650740   27867 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:50:10.652453   27867 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:50:10.652485   27867 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:10.652894   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.652932   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.667633   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37867
	I0802 17:50:10.668078   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.668565   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.668587   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.668906   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.669078   27867 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:50:10.671602   27867 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:10.672084   27867 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:10.672130   27867 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:10.672286   27867 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:10.672666   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.672712   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.688776   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0802 17:50:10.689199   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.689624   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.689645   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.689962   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.690128   27867 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:50:10.690319   27867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:10.690350   27867 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:50:10.693088   27867 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:10.693515   27867 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:10.693541   27867 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:10.693638   27867 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:50:10.693817   27867 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:50:10.693954   27867 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:50:10.694103   27867 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:50:10.776040   27867 ssh_runner.go:195] Run: systemctl --version
	I0802 17:50:10.782818   27867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:10.799845   27867 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:10.799870   27867 api_server.go:166] Checking apiserver status ...
	I0802 17:50:10.799905   27867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:10.813947   27867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:50:10.822628   27867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:10.822674   27867 ssh_runner.go:195] Run: ls
	I0802 17:50:10.826727   27867 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:10.832739   27867 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:10.832763   27867 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:50:10.832771   27867 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:10.832794   27867 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:50:10.833102   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.833143   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.847588   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34307
	I0802 17:50:10.848014   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.848468   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.848497   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.848819   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.849002   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:50:10.850358   27867 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:50:10.850371   27867 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:10.850691   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.850728   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.865846   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I0802 17:50:10.866253   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.866741   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.866762   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.867094   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.867303   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:50:10.869860   27867 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:10.870427   27867 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:10.870459   27867 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:10.870599   27867 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:10.870963   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:10.871015   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:10.886607   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I0802 17:50:10.886990   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:10.887504   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:10.887530   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:10.887891   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:10.888070   27867 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:50:10.888312   27867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:10.888331   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:50:10.890727   27867 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:10.891084   27867 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:10.891132   27867 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:10.891275   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:50:10.891461   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:50:10.891613   27867 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:50:10.891744   27867 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:50:29.275349   27867 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:29.275452   27867 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:50:29.275474   27867 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:29.275495   27867 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:50:29.275517   27867 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:29.275524   27867 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:50:29.275842   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.275915   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.290607   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32827
	I0802 17:50:29.291077   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.291524   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.291546   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.291887   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.292049   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:50:29.293623   27867 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:50:29.293639   27867 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:29.293925   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.293964   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.308739   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0802 17:50:29.309172   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.309645   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.309666   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.309919   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.310069   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:50:29.312833   27867 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:29.313364   27867 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:29.313401   27867 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:29.313615   27867 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:29.314040   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.314084   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.328355   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42203
	I0802 17:50:29.328682   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.329098   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.329129   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.329433   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.329612   27867 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:50:29.329801   27867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:29.329823   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:50:29.332418   27867 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:29.332888   27867 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:29.332917   27867 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:29.333041   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:50:29.333220   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:50:29.333373   27867 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:50:29.333536   27867 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:50:29.415788   27867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:29.432536   27867 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:29.432565   27867 api_server.go:166] Checking apiserver status ...
	I0802 17:50:29.432594   27867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:29.447713   27867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:50:29.459557   27867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:29.459611   27867 ssh_runner.go:195] Run: ls
	I0802 17:50:29.464465   27867 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:29.468790   27867 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:29.468816   27867 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:50:29.468828   27867 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:29.468844   27867 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:50:29.469239   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.469281   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.483866   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0802 17:50:29.484290   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.484738   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.484758   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.485104   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.485349   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:50:29.486984   27867 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:50:29.487000   27867 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:29.487435   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.487480   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.502671   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39715
	I0802 17:50:29.503147   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.503576   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.503599   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.503874   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.504057   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:50:29.506749   27867 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:29.507223   27867 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:29.507245   27867 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:29.507376   27867 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:29.507652   27867 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:29.507684   27867 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:29.521700   27867 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0802 17:50:29.522092   27867 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:29.522540   27867 main.go:141] libmachine: Using API Version  1
	I0802 17:50:29.522568   27867 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:29.522905   27867 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:29.523081   27867 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:50:29.523258   27867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:29.523274   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:50:29.526132   27867 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:29.526525   27867 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:29.526549   27867 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:29.526739   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:50:29.526907   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:50:29.527027   27867 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:50:29.527179   27867 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:50:29.607185   27867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:29.622664   27867 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr" : exit status 3
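The exit status 3 traces back to the m02 status probe in the stderr above: the SSH dial to 192.168.39.220:22 fails with "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent even though libvirt still sees the domain as running. The following Go sketch illustrates such a TCP reachability probe under those assumptions; it is an illustration of the failing step, not the code in status.go.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a node's SSH port accepts TCP connections
// within the given timeout. A failure such as "connect: no route to host"
// is what the status output above maps to Host:Error / Kubelet:Nonexistent.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// 192.168.39.220 is the DHCP lease recorded for ha-652395-m02 in the log.
	if err := sshReachable("192.168.39.220:22", 5*time.Second); err != nil {
		fmt.Println("host: Error:", err)
		return
	}
	fmt.Println("host: Running")
}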
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-652395 -n ha-652395
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-652395 logs -n 25: (1.381378246s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m03_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m04 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp testdata/cp-test.txt                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m04_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03:/home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m03 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-652395 node stop m02 -v=7                                                     | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:43:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:43:27.532885   23378 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:43:27.533001   23378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:27.533009   23378 out.go:304] Setting ErrFile to fd 2...
	I0802 17:43:27.533014   23378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:27.533193   23378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:43:27.533719   23378 out.go:298] Setting JSON to false
	I0802 17:43:27.534584   23378 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1551,"bootTime":1722619056,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:43:27.534653   23378 start.go:139] virtualization: kvm guest
	I0802 17:43:27.536601   23378 out.go:177] * [ha-652395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:43:27.537875   23378 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:43:27.537935   23378 notify.go:220] Checking for updates...
	I0802 17:43:27.540169   23378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:43:27.541454   23378 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:43:27.542558   23378 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:27.543731   23378 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:43:27.544829   23378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:43:27.546055   23378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:43:27.579712   23378 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 17:43:27.580856   23378 start.go:297] selected driver: kvm2
	I0802 17:43:27.580872   23378 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:43:27.580894   23378 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:43:27.581571   23378 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:43:27.581645   23378 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:43:27.597294   23378 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:43:27.597338   23378 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:43:27.597546   23378 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:43:27.597599   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:43:27.597611   23378 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0802 17:43:27.597616   23378 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0802 17:43:27.597669   23378 start.go:340] cluster config:
	{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:27.597769   23378 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:43:27.600220   23378 out.go:177] * Starting "ha-652395" primary control-plane node in "ha-652395" cluster
	I0802 17:43:27.601213   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:43:27.601246   23378 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:43:27.601256   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:43:27.601342   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:43:27.601353   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:43:27.601668   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:43:27.601693   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json: {Name:mk3e0527528bd55e492678cbdc26edd1c1b05506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:27.601826   23378 start.go:360] acquireMachinesLock for ha-652395: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:43:27.601855   23378 start.go:364] duration metric: took 16.128µs to acquireMachinesLock for "ha-652395"
	I0802 17:43:27.601871   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:43:27.601926   23378 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 17:43:27.603424   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:43:27.603563   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:27.603607   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:27.617511   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0802 17:43:27.617942   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:27.618488   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:43:27.618508   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:27.618824   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:27.619007   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:27.619196   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:27.619335   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:43:27.619358   23378 client.go:168] LocalClient.Create starting
	I0802 17:43:27.619382   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:43:27.619410   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:43:27.619432   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:43:27.619484   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:43:27.619502   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:43:27.619515   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:43:27.619530   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:43:27.619538   23378 main.go:141] libmachine: (ha-652395) Calling .PreCreateCheck
	I0802 17:43:27.619938   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:27.620343   23378 main.go:141] libmachine: Creating machine...
	I0802 17:43:27.620359   23378 main.go:141] libmachine: (ha-652395) Calling .Create
	I0802 17:43:27.620483   23378 main.go:141] libmachine: (ha-652395) Creating KVM machine...
	I0802 17:43:27.621647   23378 main.go:141] libmachine: (ha-652395) DBG | found existing default KVM network
	I0802 17:43:27.622422   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.622287   23401 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0802 17:43:27.622460   23378 main.go:141] libmachine: (ha-652395) DBG | created network xml: 
	I0802 17:43:27.622468   23378 main.go:141] libmachine: (ha-652395) DBG | <network>
	I0802 17:43:27.622474   23378 main.go:141] libmachine: (ha-652395) DBG |   <name>mk-ha-652395</name>
	I0802 17:43:27.622481   23378 main.go:141] libmachine: (ha-652395) DBG |   <dns enable='no'/>
	I0802 17:43:27.622493   23378 main.go:141] libmachine: (ha-652395) DBG |   
	I0802 17:43:27.622504   23378 main.go:141] libmachine: (ha-652395) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 17:43:27.622515   23378 main.go:141] libmachine: (ha-652395) DBG |     <dhcp>
	I0802 17:43:27.622527   23378 main.go:141] libmachine: (ha-652395) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 17:43:27.622539   23378 main.go:141] libmachine: (ha-652395) DBG |     </dhcp>
	I0802 17:43:27.622555   23378 main.go:141] libmachine: (ha-652395) DBG |   </ip>
	I0802 17:43:27.622585   23378 main.go:141] libmachine: (ha-652395) DBG |   
	I0802 17:43:27.622603   23378 main.go:141] libmachine: (ha-652395) DBG | </network>
	I0802 17:43:27.622656   23378 main.go:141] libmachine: (ha-652395) DBG | 
	I0802 17:43:27.627331   23378 main.go:141] libmachine: (ha-652395) DBG | trying to create private KVM network mk-ha-652395 192.168.39.0/24...
	I0802 17:43:27.693211   23378 main.go:141] libmachine: (ha-652395) DBG | private KVM network mk-ha-652395 192.168.39.0/24 created
	I0802 17:43:27.693246   23378 main.go:141] libmachine: (ha-652395) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 ...
	I0802 17:43:27.693260   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.693209   23401 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:27.693269   23378 main.go:141] libmachine: (ha-652395) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:43:27.693355   23378 main.go:141] libmachine: (ha-652395) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:43:27.936362   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.936220   23401 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa...
	I0802 17:43:28.110545   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:28.110410   23401 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/ha-652395.rawdisk...
	I0802 17:43:28.110582   23378 main.go:141] libmachine: (ha-652395) DBG | Writing magic tar header
	I0802 17:43:28.110603   23378 main.go:141] libmachine: (ha-652395) DBG | Writing SSH key tar header
	I0802 17:43:28.110615   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:28.110557   23401 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 ...
	I0802 17:43:28.110702   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395
	I0802 17:43:28.110739   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:43:28.110773   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:28.110801   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 (perms=drwx------)
	I0802 17:43:28.110819   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:43:28.110838   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:43:28.110852   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:43:28.110863   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:43:28.110881   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:43:28.110894   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:43:28.110907   23378 main.go:141] libmachine: (ha-652395) Creating domain...
	I0802 17:43:28.110983   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:43:28.111025   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:43:28.111042   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home
	I0802 17:43:28.111053   23378 main.go:141] libmachine: (ha-652395) DBG | Skipping /home - not owner
	I0802 17:43:28.111908   23378 main.go:141] libmachine: (ha-652395) define libvirt domain using xml: 
	I0802 17:43:28.111926   23378 main.go:141] libmachine: (ha-652395) <domain type='kvm'>
	I0802 17:43:28.111936   23378 main.go:141] libmachine: (ha-652395)   <name>ha-652395</name>
	I0802 17:43:28.111944   23378 main.go:141] libmachine: (ha-652395)   <memory unit='MiB'>2200</memory>
	I0802 17:43:28.111953   23378 main.go:141] libmachine: (ha-652395)   <vcpu>2</vcpu>
	I0802 17:43:28.111960   23378 main.go:141] libmachine: (ha-652395)   <features>
	I0802 17:43:28.111968   23378 main.go:141] libmachine: (ha-652395)     <acpi/>
	I0802 17:43:28.111975   23378 main.go:141] libmachine: (ha-652395)     <apic/>
	I0802 17:43:28.111983   23378 main.go:141] libmachine: (ha-652395)     <pae/>
	I0802 17:43:28.112001   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112010   23378 main.go:141] libmachine: (ha-652395)   </features>
	I0802 17:43:28.112019   23378 main.go:141] libmachine: (ha-652395)   <cpu mode='host-passthrough'>
	I0802 17:43:28.112028   23378 main.go:141] libmachine: (ha-652395)   
	I0802 17:43:28.112035   23378 main.go:141] libmachine: (ha-652395)   </cpu>
	I0802 17:43:28.112044   23378 main.go:141] libmachine: (ha-652395)   <os>
	I0802 17:43:28.112050   23378 main.go:141] libmachine: (ha-652395)     <type>hvm</type>
	I0802 17:43:28.112056   23378 main.go:141] libmachine: (ha-652395)     <boot dev='cdrom'/>
	I0802 17:43:28.112063   23378 main.go:141] libmachine: (ha-652395)     <boot dev='hd'/>
	I0802 17:43:28.112071   23378 main.go:141] libmachine: (ha-652395)     <bootmenu enable='no'/>
	I0802 17:43:28.112078   23378 main.go:141] libmachine: (ha-652395)   </os>
	I0802 17:43:28.112087   23378 main.go:141] libmachine: (ha-652395)   <devices>
	I0802 17:43:28.112102   23378 main.go:141] libmachine: (ha-652395)     <disk type='file' device='cdrom'>
	I0802 17:43:28.112115   23378 main.go:141] libmachine: (ha-652395)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/boot2docker.iso'/>
	I0802 17:43:28.112127   23378 main.go:141] libmachine: (ha-652395)       <target dev='hdc' bus='scsi'/>
	I0802 17:43:28.112135   23378 main.go:141] libmachine: (ha-652395)       <readonly/>
	I0802 17:43:28.112140   23378 main.go:141] libmachine: (ha-652395)     </disk>
	I0802 17:43:28.112147   23378 main.go:141] libmachine: (ha-652395)     <disk type='file' device='disk'>
	I0802 17:43:28.112158   23378 main.go:141] libmachine: (ha-652395)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:43:28.112179   23378 main.go:141] libmachine: (ha-652395)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/ha-652395.rawdisk'/>
	I0802 17:43:28.112195   23378 main.go:141] libmachine: (ha-652395)       <target dev='hda' bus='virtio'/>
	I0802 17:43:28.112204   23378 main.go:141] libmachine: (ha-652395)     </disk>
	I0802 17:43:28.112214   23378 main.go:141] libmachine: (ha-652395)     <interface type='network'>
	I0802 17:43:28.112223   23378 main.go:141] libmachine: (ha-652395)       <source network='mk-ha-652395'/>
	I0802 17:43:28.112229   23378 main.go:141] libmachine: (ha-652395)       <model type='virtio'/>
	I0802 17:43:28.112237   23378 main.go:141] libmachine: (ha-652395)     </interface>
	I0802 17:43:28.112248   23378 main.go:141] libmachine: (ha-652395)     <interface type='network'>
	I0802 17:43:28.112260   23378 main.go:141] libmachine: (ha-652395)       <source network='default'/>
	I0802 17:43:28.112273   23378 main.go:141] libmachine: (ha-652395)       <model type='virtio'/>
	I0802 17:43:28.112294   23378 main.go:141] libmachine: (ha-652395)     </interface>
	I0802 17:43:28.112304   23378 main.go:141] libmachine: (ha-652395)     <serial type='pty'>
	I0802 17:43:28.112313   23378 main.go:141] libmachine: (ha-652395)       <target port='0'/>
	I0802 17:43:28.112320   23378 main.go:141] libmachine: (ha-652395)     </serial>
	I0802 17:43:28.112331   23378 main.go:141] libmachine: (ha-652395)     <console type='pty'>
	I0802 17:43:28.112346   23378 main.go:141] libmachine: (ha-652395)       <target type='serial' port='0'/>
	I0802 17:43:28.112364   23378 main.go:141] libmachine: (ha-652395)     </console>
	I0802 17:43:28.112374   23378 main.go:141] libmachine: (ha-652395)     <rng model='virtio'>
	I0802 17:43:28.112386   23378 main.go:141] libmachine: (ha-652395)       <backend model='random'>/dev/random</backend>
	I0802 17:43:28.112395   23378 main.go:141] libmachine: (ha-652395)     </rng>
	I0802 17:43:28.112402   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112410   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112447   23378 main.go:141] libmachine: (ha-652395)   </devices>
	I0802 17:43:28.112471   23378 main.go:141] libmachine: (ha-652395) </domain>
	I0802 17:43:28.112484   23378 main.go:141] libmachine: (ha-652395) 
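
The <network> and <domain> XML dumped above is handed to libvirt verbatim: the driver first defines and starts the private network, then defines and starts the domain. A standalone sketch of those calls, assuming the libvirt.org/go/libvirt CGO bindings rather than minikube's own kvm2 driver code (the XML placeholders would be filled in from the documents printed above):

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same system URI the config dump shows (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Placeholders: paste the <network> and <domain type='kvm'> documents from the log above.
	networkXML := "<network>...</network>"
	domainXML := "<domain type='kvm'>...</domain>"

	netw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	if err := netw.Create(); err != nil { // "private KVM network mk-ha-652395 ... created"
		log.Fatal(err)
	}

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	if err := dom.Create(); err != nil { // start the defined domain
		log.Fatal(err)
	}
}

DefineXML persists the definition and Create() starts it, which lines up with the "created network xml" / "define libvirt domain using xml" / "Creating domain..." ordering above.
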
	I0802 17:43:28.116658   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ed:8e:1b in network default
	I0802 17:43:28.117252   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:28.117265   23378 main.go:141] libmachine: (ha-652395) Ensuring networks are active...
	I0802 17:43:28.117952   23378 main.go:141] libmachine: (ha-652395) Ensuring network default is active
	I0802 17:43:28.118277   23378 main.go:141] libmachine: (ha-652395) Ensuring network mk-ha-652395 is active
	I0802 17:43:28.118803   23378 main.go:141] libmachine: (ha-652395) Getting domain xml...
	I0802 17:43:28.120598   23378 main.go:141] libmachine: (ha-652395) Creating domain...
	I0802 17:43:29.304293   23378 main.go:141] libmachine: (ha-652395) Waiting to get IP...
	I0802 17:43:29.305021   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.305389   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.305417   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.305371   23401 retry.go:31] will retry after 206.437797ms: waiting for machine to come up
	I0802 17:43:29.513790   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.514187   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.514209   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.514150   23401 retry.go:31] will retry after 317.949439ms: waiting for machine to come up
	I0802 17:43:29.833691   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.834084   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.834127   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.834036   23401 retry.go:31] will retry after 296.41332ms: waiting for machine to come up
	I0802 17:43:30.132447   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:30.132882   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:30.132909   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:30.132820   23401 retry.go:31] will retry after 578.802992ms: waiting for machine to come up
	I0802 17:43:30.713751   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:30.714194   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:30.714225   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:30.714143   23401 retry.go:31] will retry after 541.137947ms: waiting for machine to come up
	I0802 17:43:31.256734   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:31.257148   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:31.257166   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:31.257112   23401 retry.go:31] will retry after 868.454467ms: waiting for machine to come up
	I0802 17:43:32.127061   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:32.127448   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:32.127479   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:32.127407   23401 retry.go:31] will retry after 957.120594ms: waiting for machine to come up
	I0802 17:43:33.086307   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:33.086703   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:33.086732   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:33.086632   23401 retry.go:31] will retry after 950.640972ms: waiting for machine to come up
	I0802 17:43:34.038690   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:34.039181   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:34.039204   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:34.039131   23401 retry.go:31] will retry after 1.174050877s: waiting for machine to come up
	I0802 17:43:35.215420   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:35.215962   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:35.215990   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:35.215910   23401 retry.go:31] will retry after 2.321948842s: waiting for machine to come up
	I0802 17:43:37.540307   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:37.540802   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:37.540830   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:37.540758   23401 retry.go:31] will retry after 2.138795762s: waiting for machine to come up
	I0802 17:43:39.682424   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:39.682734   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:39.682756   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:39.682704   23401 retry.go:31] will retry after 3.350234739s: waiting for machine to come up
	I0802 17:43:43.034379   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:43.034761   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:43.034786   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:43.034714   23401 retry.go:31] will retry after 4.438592489s: waiting for machine to come up
	I0802 17:43:47.476154   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.476553   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has current primary IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.476575   23378 main.go:141] libmachine: (ha-652395) Found IP for machine: 192.168.39.210
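
The retry.go:31 lines above show the driver polling the network's DHCP leases with a growing delay (206ms, 317ms, ... up to 4.4s) until the guest with MAC 52:54:00:ae:3a:9a reports an address. A minimal stdlib-only sketch of that polling pattern (lookupIP, the growth factor and the deadline are assumptions of this sketch, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for the DHCP-lease lookup; here it "succeeds" on the fifth poll.
func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.210", nil
}

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay and add jitter so repeated polls don't hammer libvirt.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}
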
	I0802 17:43:47.476586   23378 main.go:141] libmachine: (ha-652395) Reserving static IP address...
	I0802 17:43:47.476910   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find host DHCP lease matching {name: "ha-652395", mac: "52:54:00:ae:3a:9a", ip: "192.168.39.210"} in network mk-ha-652395
	I0802 17:43:47.546729   23378 main.go:141] libmachine: (ha-652395) DBG | Getting to WaitForSSH function...
	I0802 17:43:47.546784   23378 main.go:141] libmachine: (ha-652395) Reserved static IP address: 192.168.39.210
	I0802 17:43:47.546800   23378 main.go:141] libmachine: (ha-652395) Waiting for SSH to be available...
	I0802 17:43:47.549024   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.549350   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.549394   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.549467   23378 main.go:141] libmachine: (ha-652395) DBG | Using SSH client type: external
	I0802 17:43:47.549509   23378 main.go:141] libmachine: (ha-652395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa (-rw-------)
	I0802 17:43:47.549536   23378 main.go:141] libmachine: (ha-652395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:43:47.549558   23378 main.go:141] libmachine: (ha-652395) DBG | About to run SSH command:
	I0802 17:43:47.549572   23378 main.go:141] libmachine: (ha-652395) DBG | exit 0
	I0802 17:43:47.674982   23378 main.go:141] libmachine: (ha-652395) DBG | SSH cmd err, output: <nil>: 
	I0802 17:43:47.675260   23378 main.go:141] libmachine: (ha-652395) KVM machine creation complete!
	I0802 17:43:47.675619   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:47.676203   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:47.676379   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:47.676547   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:43:47.676564   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:43:47.677795   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:43:47.677810   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:43:47.677818   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:43:47.677827   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.680082   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.680411   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.680437   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.680572   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.680735   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.680838   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.680931   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.681070   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.681318   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.681334   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:43:47.786185   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
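
"Using SSH client type: external" above shells out to /usr/bin/ssh with the exact flags printed, while this "native" block runs the same `exit 0` readiness probe with an in-process client. A rough equivalent of the native probe using golang.org/x/crypto/ssh (address and key path are taken from this log; everything else is an assumed sketch, not minikube's sshutil code):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.210:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// The probe is literally `exit 0`: if it runs, sshd is up and key auth works.
	if err := session.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH is available")
}
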
	I0802 17:43:47.786206   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:43:47.786214   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.788979   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.789319   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.789345   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.789463   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.789645   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.789796   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.789900   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.790055   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.790274   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.790290   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:43:47.895389   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:43:47.895462   23378 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:43:47.895472   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:43:47.895483   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:47.895777   23378 buildroot.go:166] provisioning hostname "ha-652395"
	I0802 17:43:47.895801   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:47.895976   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.898234   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.898534   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.898558   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.898698   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.898911   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.899028   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.899189   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.899346   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.899518   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.899530   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395 && echo "ha-652395" | sudo tee /etc/hostname
	I0802 17:43:48.016012   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:43:48.016041   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.018712   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.019181   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.019211   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.019353   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.019529   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.019681   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.019837   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.020018   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.020223   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.020241   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:43:48.135041   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:43:48.135070   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:43:48.135126   23378 buildroot.go:174] setting up certificates
	I0802 17:43:48.135139   23378 provision.go:84] configureAuth start
	I0802 17:43:48.135150   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:48.135417   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.138137   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.138480   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.138512   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.138649   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.140762   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.141045   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.141069   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.141206   23378 provision.go:143] copyHostCerts
	I0802 17:43:48.141236   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:43:48.141275   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:43:48.141284   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:43:48.141346   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:43:48.141429   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:43:48.141447   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:43:48.141462   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:43:48.141489   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:43:48.141531   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:43:48.141548   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:43:48.141554   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:43:48.141588   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:43:48.141634   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395 san=[127.0.0.1 192.168.39.210 ha-652395 localhost minikube]
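
provision.go:117 above issues a per-machine server certificate signed by the local minikube CA, with the org and SANs listed in that line. For illustration, a compact crypto/x509 sketch of that kind of issuance (a throwaway self-signed CA stands in for ca.pem/ca-key.pem; key sizes and serial numbers are assumptions of the sketch):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA in place of minikube's ca.pem / ca-key.pem (assumption).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the org and SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-652395"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-652395", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.210")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}

The resulting server.pem/server-key.pem are what copyRemoteCerts pushes to /etc/docker on the guest a few lines below.
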
	I0802 17:43:48.239558   23378 provision.go:177] copyRemoteCerts
	I0802 17:43:48.239612   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:43:48.239635   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.242457   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.242774   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.242799   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.242926   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.243133   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.243299   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.243417   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.324685   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:43:48.324749   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:43:48.346222   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:43:48.346302   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:43:48.367321   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:43:48.367402   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0802 17:43:48.388695   23378 provision.go:87] duration metric: took 253.541137ms to configureAuth
	I0802 17:43:48.388723   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:43:48.388930   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:43:48.389017   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.391564   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.391885   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.391913   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.392056   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.392251   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.392433   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.392570   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.392709   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.392865   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.392883   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:43:48.645388   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:43:48.645420   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:43:48.645430   23378 main.go:141] libmachine: (ha-652395) Calling .GetURL
	I0802 17:43:48.646630   23378 main.go:141] libmachine: (ha-652395) DBG | Using libvirt version 6000000
	I0802 17:43:48.648475   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.648797   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.648817   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.649029   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:43:48.649041   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:43:48.649047   23378 client.go:171] duration metric: took 21.029683702s to LocalClient.Create
	I0802 17:43:48.649079   23378 start.go:167] duration metric: took 21.029733945s to libmachine.API.Create "ha-652395"
	I0802 17:43:48.649088   23378 start.go:293] postStartSetup for "ha-652395" (driver="kvm2")
	I0802 17:43:48.649097   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:43:48.649110   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.649321   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:43:48.649360   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.651633   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.651945   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.651969   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.652118   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.652345   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.652548   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.652713   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.733227   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:43:48.736973   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:43:48.736994   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:43:48.737050   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:43:48.737115   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:43:48.737128   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:43:48.737210   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:43:48.746047   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:43:48.767307   23378 start.go:296] duration metric: took 118.189518ms for postStartSetup
	I0802 17:43:48.767349   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:48.767931   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.770145   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.770431   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.770470   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.770687   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:43:48.770851   23378 start.go:128] duration metric: took 21.168914849s to createHost
	I0802 17:43:48.770870   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.772913   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.773160   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.773190   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.773352   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.773510   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.773628   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.773838   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.773954   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.774126   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.774135   23378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:43:48.879555   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620628.854031292
	
	I0802 17:43:48.879578   23378 fix.go:216] guest clock: 1722620628.854031292
	I0802 17:43:48.879588   23378 fix.go:229] Guest: 2024-08-02 17:43:48.854031292 +0000 UTC Remote: 2024-08-02 17:43:48.770861378 +0000 UTC m=+21.272573656 (delta=83.169914ms)
	I0802 17:43:48.879631   23378 fix.go:200] guest clock delta is within tolerance: 83.169914ms
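
The delta that fix.go reports is simply the guest's `date +%s.%N` reading minus the host-side reference time recorded just before, both printed above:

    1722620628.854031292 s - 1722620628.770861378 s = 0.083169914 s ≈ 83.17 ms

which matches the 83.169914ms in the log and sits well inside minikube's clock-skew tolerance, so no guest clock adjustment is needed.
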
	I0802 17:43:48.879638   23378 start.go:83] releasing machines lock for "ha-652395", held for 21.277774233s
	I0802 17:43:48.879658   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.879906   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.882158   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.882466   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.882484   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.882693   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883190   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883352   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883448   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:43:48.883480   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.883535   23378 ssh_runner.go:195] Run: cat /version.json
	I0802 17:43:48.883558   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.885979   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886112   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886327   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.886357   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886453   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.886468   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.886489   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886679   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.886695   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.886858   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.886863   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.887005   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.886996   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.887146   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.963670   23378 ssh_runner.go:195] Run: systemctl --version
	I0802 17:43:49.000362   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:43:49.153351   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:43:49.159630   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:43:49.159690   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:43:49.174393   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:43:49.174419   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:43:49.174485   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:43:49.189549   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:43:49.202460   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:43:49.202510   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:43:49.216121   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:43:49.229759   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:43:49.342217   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:43:49.477112   23378 docker.go:233] disabling docker service ...
	I0802 17:43:49.477177   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:43:49.490688   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:43:49.502398   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:43:49.638741   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:43:49.747840   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:43:49.760987   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:43:49.777504   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:43:49.777559   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.786762   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:43:49.786828   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.796125   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.805267   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.814132   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:43:49.823601   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.832591   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.847883   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.857095   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:43:49.865698   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:43:49.865769   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:43:49.877492   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:43:49.887087   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:43:49.990294   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
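
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before systemd is reloaded and cri-o restarted. Replayed outside minikube's ssh_runner, a representative subset of that sequence could be driven like this (plain `ssh` via os/exec; host, key path and config path come from this log, the rest is an assumption of the sketch):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	confFile := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, confFile),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, confFile),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, confFile),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, confFile),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		ssh := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-i", "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa",
			"docker@192.168.39.210",
			c)
		if out, err := ssh.CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", c, err, out)
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}
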
	I0802 17:43:50.117171   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:43:50.117248   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:43:50.121957   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:43:50.121992   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:43:50.125194   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:43:50.161936   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:43:50.162018   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:43:50.188078   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:43:50.222165   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:43:50.223314   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:50.225669   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:50.225973   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:50.226014   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:50.226182   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:43:50.230075   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:43:50.242064   23378 kubeadm.go:883] updating cluster {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:43:50.242158   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:43:50.242222   23378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:43:50.271773   23378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 17:43:50.271828   23378 ssh_runner.go:195] Run: which lz4
	I0802 17:43:50.275129   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0802 17:43:50.275210   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 17:43:50.278906   23378 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 17:43:50.278938   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 17:43:51.475425   23378 crio.go:462] duration metric: took 1.200229686s to copy over tarball
	I0802 17:43:51.475504   23378 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 17:43:53.541418   23378 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065866585s)
	I0802 17:43:53.541456   23378 crio.go:469] duration metric: took 2.065994563s to extract the tarball
	I0802 17:43:53.541466   23378 ssh_runner.go:146] rm: /preloaded.tar.lz4
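
The preload step above avoids pulling every image at bootstrap time: the cached lz4-compressed image tarball is copied from the host's .minikube cache to the node and unpacked into /var so CRI-O already has the images on disk. A rough Go sketch of the check-then-extract flow, assuming the tarball path from the log and an lz4 binary on the node (illustrative only, not minikube's implementation):

// preload_extract.go: sketch of the preload handling above. If the tarball is
// present on the node, unpack it into /var with `tar -I lz4`; otherwise it
// would first be copied over from the host cache. Paths are assumptions.
package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    const tarball = "/preloaded.tar.lz4"
    if _, err := os.Stat(tarball); err != nil {
        fmt.Fprintf(os.Stderr, "tarball missing (%v); it would be scp'd from the host cache first\n", err)
        os.Exit(1)
    }
    // Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4", "-C", "/var", "-xf", tarball)
    if out, err := cmd.CombinedOutput(); err != nil {
        fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
        os.Exit(1)
    }
    fmt.Println("preloaded images extracted; the tarball can now be removed")
}
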
	I0802 17:43:53.578000   23378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:43:53.619614   23378 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:43:53.619638   23378 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:43:53.619647   23378 kubeadm.go:934] updating node { 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0802 17:43:53.619781   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:43:53.619863   23378 ssh_runner.go:195] Run: crio config
	I0802 17:43:53.667999   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:43:53.668024   23378 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 17:43:53.668034   23378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:43:53.668057   23378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-652395 NodeName:ha-652395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
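
The kubeadm config printed below is rendered from the option values in the "kubeadm options" line above (node IP, node name, CRI socket, pod CIDR, and so on). As a rough illustration of that kind of templating, here is a minimal text/template sketch with a made-up template and struct; it is not minikube's actual template.

// kubeadm_template.go: minimal sketch of rendering a kubeadm InitConfiguration
// from per-node values, in the spirit of the generated config that follows.
// The template and struct here are illustrative only.
package main

import (
    "os"
    "text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type nodeValues struct {
    NodeIP        string
    APIServerPort int
    CRISocket     string
    NodeName      string
}

func main() {
    tmpl := template.Must(template.New("init").Parse(initCfg))
    v := nodeValues{
        NodeIP:        "192.168.39.210",
        APIServerPort: 8443,
        CRISocket:     "unix:///var/run/crio/crio.sock",
        NodeName:      "ha-652395",
    }
    if err := tmpl.Execute(os.Stdout, v); err != nil {
        panic(err)
    }
}
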
	I0802 17:43:53.668221   23378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-652395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 17:43:53.668249   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:43:53.668309   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:43:53.683501   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:43:53.683641   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0802 17:43:53.683724   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:43:53.692904   23378 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:43:53.692974   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0802 17:43:53.701414   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0802 17:43:53.716312   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:43:53.730577   23378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 17:43:53.745247   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0802 17:43:53.760126   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:43:53.763517   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:43:53.774170   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:43:53.889085   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:43:53.905244   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.210
	I0802 17:43:53.905264   23378 certs.go:194] generating shared ca certs ...
	I0802 17:43:53.905288   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:53.905446   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:43:53.905482   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:43:53.905491   23378 certs.go:256] generating profile certs ...
	I0802 17:43:53.905539   23378 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:43:53.905552   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt with IP's: []
	I0802 17:43:54.053414   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt ...
	I0802 17:43:54.053445   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt: {Name:mk314022aeb5eeb0a845d5e8cd46286bc9907522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.053633   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key ...
	I0802 17:43:54.053646   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key: {Name:mk5b437e61241eb8c16ba4e9fbfd32eed2d1a7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.053733   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c
	I0802 17:43:54.053750   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.254]
	I0802 17:43:54.304477   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c ...
	I0802 17:43:54.304511   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c: {Name:mkcd4a89a2871e6bdf2fd9eb443ed97cb6069758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.304686   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c ...
	I0802 17:43:54.304700   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c: {Name:mkbaf0ce6457d1d137e82c654b0f103e2bb7dffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.304777   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:43:54.304874   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:43:54.304938   23378 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:43:54.304955   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt with IP's: []
	I0802 17:43:54.367003   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt ...
	I0802 17:43:54.367035   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt: {Name:mk9b147340d68f0948aa055cf8f58f42b1889b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.367225   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key ...
	I0802 17:43:54.367239   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key: {Name:mke9487a3c9b3a3f630f52ed701c26cf34a31157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.367320   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:43:54.367341   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:43:54.367355   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:43:54.367374   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:43:54.367389   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:43:54.367405   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:43:54.367420   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:43:54.367435   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:43:54.367492   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:43:54.367529   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:43:54.367541   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:43:54.367567   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:43:54.367592   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:43:54.367616   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:43:54.367668   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:43:54.367698   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.367715   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.367730   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.368234   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:43:54.392172   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:43:54.413471   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:43:54.434644   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:43:54.456366   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0802 17:43:54.478038   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:43:54.499093   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:43:54.520439   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:43:54.542074   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:43:54.563439   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:43:54.584546   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:43:54.606302   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
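
The apiserver profile certificate generated above carries IP SANs for the in-cluster service VIP (10.96.0.1), loopback, the node IP, and the HA virtual IP (192.168.39.254), so clients can reach the API server on any of those addresses. A self-contained crypto/x509 sketch of issuing such a certificate from a throwaway CA; this is illustrative only and not minikube's certs helper.

// ipsan_cert.go: issue a serving certificate with the IP SANs listed in the
// log above, signed by a throwaway self-signed CA. Errors on key generation
// are ignored for brevity; the CA here is purely for demonstration.
package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "fmt"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA.
    caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    caCert, _ := x509.ParseCertificate(caDER)

    // Serving cert with the IP SANs the API server needs.
    leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    leafTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "kube-apiserver"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"),
            net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.210"),
            net.ParseIP("192.168.39.254"),
        },
    }
    leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    fmt.Println("issued serving cert with IP SANs")
}
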
	I0802 17:43:54.621748   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:43:54.627337   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:43:54.637187   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.641155   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.641213   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.646744   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:43:54.656795   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:43:54.669041   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.673199   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.673270   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.686479   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:43:54.710437   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:43:54.721397   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.728696   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.728762   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.735208   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
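
Making the minikube CA and the extra certs trusted inside the guest follows the classic OpenSSL layout above: the PEM file sits under /usr/share/ca-certificates and a symlink named after its subject hash (from openssl x509 -hash) is created in /etc/ssl/certs. A small Go sketch of that wiring, assuming openssl on PATH and root permissions; illustrative, not minikube's implementation.

// cert_symlink.go: compute the OpenSSL subject hash of a PEM certificate and
// create the /etc/ssl/certs/<hash>.0 symlink, mirroring the `ln -fs` steps above.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func main() {
    certPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        fmt.Fprintf(os.Stderr, "openssl hash failed: %v\n", err)
        os.Exit(1)
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0")
    // ln -fs equivalent: remove any stale link before creating the new one.
    _ = os.Remove(link)
    if err := os.Symlink(certPath, link); err != nil {
        fmt.Fprintf(os.Stderr, "symlink failed: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("linked %s -> %s\n", link, certPath)
}
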
	I0802 17:43:54.745472   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:43:54.749220   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:43:54.749277   23378 kubeadm.go:392] StartCluster: {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:54.749343   23378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:43:54.749398   23378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:43:54.788152   23378 cri.go:89] found id: ""
	I0802 17:43:54.788236   23378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 17:43:54.797773   23378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 17:43:54.806467   23378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 17:43:54.815266   23378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 17:43:54.815284   23378 kubeadm.go:157] found existing configuration files:
	
	I0802 17:43:54.815332   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 17:43:54.823631   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 17:43:54.823698   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 17:43:54.832510   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 17:43:54.841157   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 17:43:54.841221   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 17:43:54.850423   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 17:43:54.858886   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 17:43:54.858943   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 17:43:54.867858   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 17:43:54.876271   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 17:43:54.876333   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 17:43:54.885019   23378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 17:43:54.980098   23378 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 17:43:54.980156   23378 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 17:43:55.093176   23378 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 17:43:55.093342   23378 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 17:43:55.093476   23378 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 17:43:55.275755   23378 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 17:43:55.278933   23378 out.go:204]   - Generating certificates and keys ...
	I0802 17:43:55.279155   23378 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 17:43:55.279710   23378 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 17:43:55.405849   23378 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 17:43:55.560710   23378 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 17:43:55.626835   23378 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 17:43:55.710955   23378 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 17:43:55.808965   23378 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 17:43:55.809202   23378 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-652395 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0802 17:43:56.078095   23378 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 17:43:56.078366   23378 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-652395 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0802 17:43:56.234541   23378 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 17:43:56.413241   23378 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 17:43:56.651554   23378 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 17:43:56.651770   23378 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 17:43:56.760727   23378 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 17:43:56.809425   23378 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 17:43:57.166254   23378 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 17:43:57.345558   23378 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 17:43:57.523845   23378 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 17:43:57.524396   23378 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 17:43:57.527324   23378 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 17:43:57.529067   23378 out.go:204]   - Booting up control plane ...
	I0802 17:43:57.529164   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 17:43:57.529258   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 17:43:57.529451   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 17:43:57.546637   23378 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 17:43:57.547543   23378 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 17:43:57.547585   23378 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 17:43:57.681126   23378 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 17:43:57.681231   23378 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 17:43:58.182680   23378 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.628438ms
	I0802 17:43:58.182774   23378 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 17:44:04.086599   23378 kubeadm.go:310] [api-check] The API server is healthy after 5.9078985s
	I0802 17:44:04.099166   23378 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 17:44:04.113927   23378 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 17:44:04.141168   23378 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 17:44:04.141377   23378 kubeadm.go:310] [mark-control-plane] Marking the node ha-652395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 17:44:04.158760   23378 kubeadm.go:310] [bootstrap-token] Using token: gh7ckt.nhzg9mtgbeyyrv9u
	I0802 17:44:04.160217   23378 out.go:204]   - Configuring RBAC rules ...
	I0802 17:44:04.160374   23378 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 17:44:04.164771   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 17:44:04.180573   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 17:44:04.184329   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 17:44:04.188291   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 17:44:04.193124   23378 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 17:44:04.493327   23378 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 17:44:04.936050   23378 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 17:44:05.494536   23378 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 17:44:05.494565   23378 kubeadm.go:310] 
	I0802 17:44:05.494644   23378 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 17:44:05.494651   23378 kubeadm.go:310] 
	I0802 17:44:05.494752   23378 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 17:44:05.494762   23378 kubeadm.go:310] 
	I0802 17:44:05.494817   23378 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 17:44:05.494899   23378 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 17:44:05.494967   23378 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 17:44:05.494976   23378 kubeadm.go:310] 
	I0802 17:44:05.495049   23378 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 17:44:05.495059   23378 kubeadm.go:310] 
	I0802 17:44:05.495137   23378 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 17:44:05.495147   23378 kubeadm.go:310] 
	I0802 17:44:05.495217   23378 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 17:44:05.495341   23378 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 17:44:05.495413   23378 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 17:44:05.495421   23378 kubeadm.go:310] 
	I0802 17:44:05.495493   23378 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 17:44:05.495562   23378 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 17:44:05.495568   23378 kubeadm.go:310] 
	I0802 17:44:05.495635   23378 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gh7ckt.nhzg9mtgbeyyrv9u \
	I0802 17:44:05.495724   23378 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 17:44:05.495747   23378 kubeadm.go:310] 	--control-plane 
	I0802 17:44:05.495753   23378 kubeadm.go:310] 
	I0802 17:44:05.495822   23378 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 17:44:05.495829   23378 kubeadm.go:310] 
	I0802 17:44:05.495894   23378 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gh7ckt.nhzg9mtgbeyyrv9u \
	I0802 17:44:05.496028   23378 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 17:44:05.496466   23378 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 17:44:05.496493   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:44:05.496503   23378 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 17:44:05.498369   23378 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0802 17:44:05.499806   23378 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0802 17:44:05.505146   23378 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0802 17:44:05.505164   23378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0802 17:44:05.523570   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0802 17:44:05.957867   23378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 17:44:05.958009   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:05.958020   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395 minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=true
	I0802 17:44:06.021205   23378 ops.go:34] apiserver oom_adj: -16
	I0802 17:44:06.127885   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:06.628760   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:07.128391   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:07.627986   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:08.128127   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:08.627976   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:09.128814   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:09.628646   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:10.128361   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:10.628952   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:11.128198   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:11.628805   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:12.128917   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:12.628721   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:13.128762   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:13.628830   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:14.128121   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:14.628558   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:15.128510   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:15.628063   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:16.128965   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:16.628500   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:17.128859   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:17.250527   23378 kubeadm.go:1113] duration metric: took 11.292568167s to wait for elevateKubeSystemPrivileges
	I0802 17:44:17.250570   23378 kubeadm.go:394] duration metric: took 22.501297226s to StartCluster
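
The repeated `kubectl get sa default` calls above poll until the controller manager has created the default ServiceAccount; the log labels this wait elevateKubeSystemPrivileges and retries roughly twice a second. A minimal sketch of an equivalent wait, shelling out to kubectl with the kubeconfig path from the log; illustrative only, not minikube's implementation.

// wait_default_sa.go: retry `kubectl get sa default` until the default
// ServiceAccount exists or a deadline passes, mirroring the poll loop above.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    kubeconfig := "/var/lib/minikube/kubeconfig" // path used in the log
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
        if err := cmd.Run(); err == nil {
            fmt.Println("default ServiceAccount exists; cluster bootstrap can continue")
            return
        }
        time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
    }
    fmt.Fprintln(os.Stderr, "timed out waiting for the default ServiceAccount")
    os.Exit(1)
}
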
	I0802 17:44:17.250594   23378 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:17.250681   23378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:44:17.251618   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:17.251841   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 17:44:17.251852   23378 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:17.251877   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:44:17.251889   23378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 17:44:17.251950   23378 addons.go:69] Setting storage-provisioner=true in profile "ha-652395"
	I0802 17:44:17.251958   23378 addons.go:69] Setting default-storageclass=true in profile "ha-652395"
	I0802 17:44:17.251978   23378 addons.go:234] Setting addon storage-provisioner=true in "ha-652395"
	I0802 17:44:17.251996   23378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-652395"
	I0802 17:44:17.252008   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:17.252115   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:17.252448   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.252481   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.252448   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.252601   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.267843   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0802 17:44:17.267846   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0802 17:44:17.268349   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.268399   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.268908   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.268927   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.268911   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.268991   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.269276   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.269321   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.269538   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.269836   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.269871   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.271906   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:44:17.272284   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 17:44:17.272869   23378 cert_rotation.go:137] Starting client certificate rotation controller
	I0802 17:44:17.273070   23378 addons.go:234] Setting addon default-storageclass=true in "ha-652395"
	I0802 17:44:17.273113   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:17.273513   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.273543   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.285737   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0802 17:44:17.286210   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.286750   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.286776   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.287074   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.287266   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.287801   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0802 17:44:17.288165   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.288636   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.288703   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.288960   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:17.289205   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.289845   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.289869   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.291346   23378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 17:44:17.292671   23378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:44:17.292703   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 17:44:17.292726   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:17.296021   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.296529   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:17.296603   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.296888   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:17.297058   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:17.297225   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:17.297386   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:17.305530   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0802 17:44:17.305939   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.306438   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.306456   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.306783   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.307000   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.308614   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:17.308824   23378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 17:44:17.308842   23378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 17:44:17.308861   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:17.311528   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.312037   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:17.312073   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.312264   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:17.312431   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:17.312605   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:17.312738   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:17.367506   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 17:44:17.456500   23378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:44:17.469431   23378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 17:44:17.852598   23378 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
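For reference, the sed pipeline at 17:44:17.367506 above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway (192.168.39.1). A minimal Go sketch of the same transformation on a plain Corefile string follows; the helper name and the sample Corefile are illustrative, not minikube's actual code.

package main

import (
    "fmt"
    "strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block immediately before the
// "forward . /etc/resolv.conf" line, mirroring what the sed command above
// does to the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
    block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    var out strings.Builder
    for _, line := range strings.SplitAfter(corefile, "\n") {
        if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
            out.WriteString(block)
        }
        out.WriteString(line)
    }
    return out.String()
}

func main() {
    // Abbreviated example Corefile, not the cluster's real one.
    corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
    fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}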
	I0802 17:44:18.117178   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117200   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117240   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117260   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117480   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.117497   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.117507   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117517   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117525   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.117483   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.117549   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.117561   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117569   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.119196   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.119213   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.119219   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.119245   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.119260   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.119283   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.119334   23378 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0802 17:44:18.119345   23378 round_trippers.go:469] Request Headers:
	I0802 17:44:18.119354   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:44:18.119362   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:44:18.129277   23378 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0802 17:44:18.129874   23378 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0802 17:44:18.129891   23378 round_trippers.go:469] Request Headers:
	I0802 17:44:18.129902   23378 round_trippers.go:473]     Content-Type: application/json
	I0802 17:44:18.129907   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:44:18.129911   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:44:18.132586   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:44:18.132811   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.132838   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.133081   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.133095   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.133105   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.134947   23378 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 17:44:18.136166   23378 addons.go:510] duration metric: took 884.272175ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0802 17:44:18.136208   23378 start.go:246] waiting for cluster config update ...
	I0802 17:44:18.136222   23378 start.go:255] writing updated cluster config ...
	I0802 17:44:18.137724   23378 out.go:177] 
	I0802 17:44:18.139128   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:18.139205   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:18.140885   23378 out.go:177] * Starting "ha-652395-m02" control-plane node in "ha-652395" cluster
	I0802 17:44:18.142148   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:44:18.142186   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:44:18.142277   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:44:18.142319   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:44:18.142418   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:18.142659   23378 start.go:360] acquireMachinesLock for ha-652395-m02: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:44:18.142710   23378 start.go:364] duration metric: took 29.12µs to acquireMachinesLock for "ha-652395-m02"
	I0802 17:44:18.142726   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:18.142841   23378 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0802 17:44:18.145485   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:44:18.145595   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:18.145631   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:18.159958   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I0802 17:44:18.160419   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:18.160899   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:18.160921   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:18.161237   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:18.161412   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:18.161552   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:18.161698   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:44:18.161730   23378 client.go:168] LocalClient.Create starting
	I0802 17:44:18.161758   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:44:18.161786   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:44:18.161800   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:44:18.161846   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:44:18.161864   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:44:18.161885   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:44:18.161900   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:44:18.161908   23378 main.go:141] libmachine: (ha-652395-m02) Calling .PreCreateCheck
	I0802 17:44:18.162043   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:18.162490   23378 main.go:141] libmachine: Creating machine...
	I0802 17:44:18.162503   23378 main.go:141] libmachine: (ha-652395-m02) Calling .Create
	I0802 17:44:18.162663   23378 main.go:141] libmachine: (ha-652395-m02) Creating KVM machine...
	I0802 17:44:18.163863   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found existing default KVM network
	I0802 17:44:18.164019   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found existing private KVM network mk-ha-652395
	I0802 17:44:18.164159   23378 main.go:141] libmachine: (ha-652395-m02) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 ...
	I0802 17:44:18.164181   23378 main.go:141] libmachine: (ha-652395-m02) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:44:18.164260   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.164157   23763 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:44:18.164384   23378 main.go:141] libmachine: (ha-652395-m02) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:44:18.390136   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.390008   23763 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa...
	I0802 17:44:18.528332   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.528175   23763 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/ha-652395-m02.rawdisk...
	I0802 17:44:18.528368   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Writing magic tar header
	I0802 17:44:18.528425   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Writing SSH key tar header
	I0802 17:44:18.528456   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.528319   23763 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 ...
	I0802 17:44:18.528473   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02
	I0802 17:44:18.528482   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:44:18.528500   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 (perms=drwx------)
	I0802 17:44:18.528510   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:44:18.528517   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:44:18.528526   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:44:18.528535   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:44:18.528542   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:44:18.528552   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:44:18.528561   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:44:18.528569   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:44:18.528575   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:44:18.528585   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home
	I0802 17:44:18.528592   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Skipping /home - not owner
	I0802 17:44:18.528620   23378 main.go:141] libmachine: (ha-652395-m02) Creating domain...
	I0802 17:44:18.529489   23378 main.go:141] libmachine: (ha-652395-m02) define libvirt domain using xml: 
	I0802 17:44:18.529509   23378 main.go:141] libmachine: (ha-652395-m02) <domain type='kvm'>
	I0802 17:44:18.529520   23378 main.go:141] libmachine: (ha-652395-m02)   <name>ha-652395-m02</name>
	I0802 17:44:18.529526   23378 main.go:141] libmachine: (ha-652395-m02)   <memory unit='MiB'>2200</memory>
	I0802 17:44:18.529534   23378 main.go:141] libmachine: (ha-652395-m02)   <vcpu>2</vcpu>
	I0802 17:44:18.529541   23378 main.go:141] libmachine: (ha-652395-m02)   <features>
	I0802 17:44:18.529549   23378 main.go:141] libmachine: (ha-652395-m02)     <acpi/>
	I0802 17:44:18.529564   23378 main.go:141] libmachine: (ha-652395-m02)     <apic/>
	I0802 17:44:18.529572   23378 main.go:141] libmachine: (ha-652395-m02)     <pae/>
	I0802 17:44:18.529580   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.529592   23378 main.go:141] libmachine: (ha-652395-m02)   </features>
	I0802 17:44:18.529606   23378 main.go:141] libmachine: (ha-652395-m02)   <cpu mode='host-passthrough'>
	I0802 17:44:18.529617   23378 main.go:141] libmachine: (ha-652395-m02)   
	I0802 17:44:18.529625   23378 main.go:141] libmachine: (ha-652395-m02)   </cpu>
	I0802 17:44:18.529632   23378 main.go:141] libmachine: (ha-652395-m02)   <os>
	I0802 17:44:18.529640   23378 main.go:141] libmachine: (ha-652395-m02)     <type>hvm</type>
	I0802 17:44:18.529648   23378 main.go:141] libmachine: (ha-652395-m02)     <boot dev='cdrom'/>
	I0802 17:44:18.529658   23378 main.go:141] libmachine: (ha-652395-m02)     <boot dev='hd'/>
	I0802 17:44:18.529668   23378 main.go:141] libmachine: (ha-652395-m02)     <bootmenu enable='no'/>
	I0802 17:44:18.529681   23378 main.go:141] libmachine: (ha-652395-m02)   </os>
	I0802 17:44:18.529691   23378 main.go:141] libmachine: (ha-652395-m02)   <devices>
	I0802 17:44:18.529703   23378 main.go:141] libmachine: (ha-652395-m02)     <disk type='file' device='cdrom'>
	I0802 17:44:18.529719   23378 main.go:141] libmachine: (ha-652395-m02)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/boot2docker.iso'/>
	I0802 17:44:18.529730   23378 main.go:141] libmachine: (ha-652395-m02)       <target dev='hdc' bus='scsi'/>
	I0802 17:44:18.529737   23378 main.go:141] libmachine: (ha-652395-m02)       <readonly/>
	I0802 17:44:18.529747   23378 main.go:141] libmachine: (ha-652395-m02)     </disk>
	I0802 17:44:18.529768   23378 main.go:141] libmachine: (ha-652395-m02)     <disk type='file' device='disk'>
	I0802 17:44:18.529793   23378 main.go:141] libmachine: (ha-652395-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:44:18.529809   23378 main.go:141] libmachine: (ha-652395-m02)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/ha-652395-m02.rawdisk'/>
	I0802 17:44:18.529821   23378 main.go:141] libmachine: (ha-652395-m02)       <target dev='hda' bus='virtio'/>
	I0802 17:44:18.529830   23378 main.go:141] libmachine: (ha-652395-m02)     </disk>
	I0802 17:44:18.529835   23378 main.go:141] libmachine: (ha-652395-m02)     <interface type='network'>
	I0802 17:44:18.529841   23378 main.go:141] libmachine: (ha-652395-m02)       <source network='mk-ha-652395'/>
	I0802 17:44:18.529848   23378 main.go:141] libmachine: (ha-652395-m02)       <model type='virtio'/>
	I0802 17:44:18.529854   23378 main.go:141] libmachine: (ha-652395-m02)     </interface>
	I0802 17:44:18.529868   23378 main.go:141] libmachine: (ha-652395-m02)     <interface type='network'>
	I0802 17:44:18.529881   23378 main.go:141] libmachine: (ha-652395-m02)       <source network='default'/>
	I0802 17:44:18.529892   23378 main.go:141] libmachine: (ha-652395-m02)       <model type='virtio'/>
	I0802 17:44:18.529903   23378 main.go:141] libmachine: (ha-652395-m02)     </interface>
	I0802 17:44:18.529909   23378 main.go:141] libmachine: (ha-652395-m02)     <serial type='pty'>
	I0802 17:44:18.529915   23378 main.go:141] libmachine: (ha-652395-m02)       <target port='0'/>
	I0802 17:44:18.529921   23378 main.go:141] libmachine: (ha-652395-m02)     </serial>
	I0802 17:44:18.529929   23378 main.go:141] libmachine: (ha-652395-m02)     <console type='pty'>
	I0802 17:44:18.529940   23378 main.go:141] libmachine: (ha-652395-m02)       <target type='serial' port='0'/>
	I0802 17:44:18.529954   23378 main.go:141] libmachine: (ha-652395-m02)     </console>
	I0802 17:44:18.529968   23378 main.go:141] libmachine: (ha-652395-m02)     <rng model='virtio'>
	I0802 17:44:18.529979   23378 main.go:141] libmachine: (ha-652395-m02)       <backend model='random'>/dev/random</backend>
	I0802 17:44:18.529989   23378 main.go:141] libmachine: (ha-652395-m02)     </rng>
	I0802 17:44:18.529998   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.530003   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.530008   23378 main.go:141] libmachine: (ha-652395-m02)   </devices>
	I0802 17:44:18.530014   23378 main.go:141] libmachine: (ha-652395-m02) </domain>
	I0802 17:44:18.530021   23378 main.go:141] libmachine: (ha-652395-m02) 
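The XML logged above is the libvirt domain definition for the new m02 machine: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a SCSI cdrom, the raw disk image, and two virtio NICs on the mk-ha-652395 and default networks. A rough sketch of how such a domain could be defined and started through the libvirt Go bindings, assuming the libvirt.org/go/libvirt module; the XML here is heavily abbreviated and the kvm2 driver's real wrapper differs.

package main

import (
    "fmt"
    "log"

    libvirt "libvirt.org/go/libvirt"
)

func main() {
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        log.Fatalf("connect: %v", err)
    }
    defer conn.Close()

    // Abbreviated for illustration; a usable definition needs the disk,
    // cdrom and network devices shown in the logged XML above.
    domainXML := `<domain type='kvm'>
  <name>ha-652395-m02</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

    dom, err := conn.DomainDefineXML(domainXML) // persist the definition
    if err != nil {
        log.Fatalf("define: %v", err)
    }
    defer dom.Free()

    if err := dom.Create(); err != nil { // boot the defined domain
        log.Fatalf("start: %v", err)
    }
    fmt.Println("domain defined and started")
}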
	I0802 17:44:18.536563   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:02:98:f3 in network default
	I0802 17:44:18.537135   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring networks are active...
	I0802 17:44:18.537153   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:18.537838   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring network default is active
	I0802 17:44:18.538245   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring network mk-ha-652395 is active
	I0802 17:44:18.538616   23378 main.go:141] libmachine: (ha-652395-m02) Getting domain xml...
	I0802 17:44:18.539291   23378 main.go:141] libmachine: (ha-652395-m02) Creating domain...
	I0802 17:44:19.736873   23378 main.go:141] libmachine: (ha-652395-m02) Waiting to get IP...
	I0802 17:44:19.737634   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:19.738084   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:19.738126   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:19.738068   23763 retry.go:31] will retry after 217.948043ms: waiting for machine to come up
	I0802 17:44:19.958844   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:19.959262   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:19.959291   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:19.959230   23763 retry.go:31] will retry after 326.796973ms: waiting for machine to come up
	I0802 17:44:20.287452   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:20.287947   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:20.287982   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:20.287894   23763 retry.go:31] will retry after 376.716008ms: waiting for machine to come up
	I0802 17:44:20.666405   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:20.666943   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:20.666973   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:20.666909   23763 retry.go:31] will retry after 564.174398ms: waiting for machine to come up
	I0802 17:44:21.232225   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:21.232677   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:21.232706   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:21.232640   23763 retry.go:31] will retry after 733.655034ms: waiting for machine to come up
	I0802 17:44:21.967411   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:21.967809   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:21.967830   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:21.967776   23763 retry.go:31] will retry after 665.784935ms: waiting for machine to come up
	I0802 17:44:22.634995   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:22.635613   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:22.635642   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:22.635569   23763 retry.go:31] will retry after 790.339868ms: waiting for machine to come up
	I0802 17:44:23.427950   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:23.428503   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:23.428530   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:23.428467   23763 retry.go:31] will retry after 968.769963ms: waiting for machine to come up
	I0802 17:44:24.398711   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:24.399081   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:24.399115   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:24.399042   23763 retry.go:31] will retry after 1.755457058s: waiting for machine to come up
	I0802 17:44:26.156831   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:26.157231   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:26.157260   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:26.157187   23763 retry.go:31] will retry after 2.231533101s: waiting for machine to come up
	I0802 17:44:28.390743   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:28.391237   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:28.391259   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:28.391157   23763 retry.go:31] will retry after 2.175447005s: waiting for machine to come up
	I0802 17:44:30.569368   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:30.569868   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:30.569898   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:30.569816   23763 retry.go:31] will retry after 3.609031806s: waiting for machine to come up
	I0802 17:44:34.179928   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:34.180339   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:34.180364   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:34.180295   23763 retry.go:31] will retry after 3.725193463s: waiting for machine to come up
	I0802 17:44:37.908271   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.908731   23378 main.go:141] libmachine: (ha-652395-m02) Found IP for machine: 192.168.39.220
	I0802 17:44:37.908756   23378 main.go:141] libmachine: (ha-652395-m02) Reserving static IP address...
	I0802 17:44:37.908765   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.909129   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find host DHCP lease matching {name: "ha-652395-m02", mac: "52:54:00:da:d8:1e", ip: "192.168.39.220"} in network mk-ha-652395
	I0802 17:44:37.981456   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Getting to WaitForSSH function...
	I0802 17:44:37.981491   23378 main.go:141] libmachine: (ha-652395-m02) Reserved static IP address: 192.168.39.220
	I0802 17:44:37.981507   23378 main.go:141] libmachine: (ha-652395-m02) Waiting for SSH to be available...
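The "will retry after ..." lines above come from a backoff loop that keeps polling the DHCP leases of network mk-ha-652395 until the new domain's MAC address shows up with an IP. A simplified Go sketch of that retry pattern; the helper, intervals and jitter are illustrative, not retry.go itself.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// retryWithBackoff calls fn and, on failure, sleeps for a jittered, growing
// interval before trying again, up to maxAttempts — the same shape as the
// increasing "will retry after" intervals logged above.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
    var err error
    for attempt := 0; attempt < maxAttempts; attempt++ {
        if err = fn(); err == nil {
            return nil
        }
        wait := base*time.Duration(1<<uint(attempt)) + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("will retry after %v: %v\n", wait, err)
        time.Sleep(wait)
    }
    return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, err)
}

func main() {
    attempts := 0
    err := retryWithBackoff(5, 200*time.Millisecond, func() error {
        attempts++
        if attempts < 3 {
            return errors.New("unable to find current IP address")
        }
        return nil
    })
    fmt.Println("result:", err)
}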
	I0802 17:44:37.984054   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.984437   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:37.984466   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.984606   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using SSH client type: external
	I0802 17:44:37.984626   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa (-rw-------)
	I0802 17:44:37.984693   23378 main.go:141] libmachine: (ha-652395-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:44:37.984731   23378 main.go:141] libmachine: (ha-652395-m02) DBG | About to run SSH command:
	I0802 17:44:37.984750   23378 main.go:141] libmachine: (ha-652395-m02) DBG | exit 0
	I0802 17:44:38.107117   23378 main.go:141] libmachine: (ha-652395-m02) DBG | SSH cmd err, output: <nil>: 
	I0802 17:44:38.107368   23378 main.go:141] libmachine: (ha-652395-m02) KVM machine creation complete!
	I0802 17:44:38.107781   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:38.108358   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:38.108554   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:38.108723   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:44:38.108741   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:44:38.109932   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:44:38.109943   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:44:38.109949   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:44:38.109955   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.112060   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.112416   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.112445   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.112597   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.112786   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.112943   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.113070   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.113224   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.113437   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.113451   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:44:38.214068   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
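"About to run SSH command: exit 0" is the readiness probe: the provisioner keeps dialing the node with the freshly generated key until a trivial command succeeds. A minimal sketch of that probe using golang.org/x/crypto/ssh; the function is illustrative, with the address, user and key path taken from the log lines above.

package main

import (
    "fmt"
    "log"
    "os"

    "golang.org/x/crypto/ssh"
)

// waitForSSH dials the node with the generated private key and runs a no-op
// command to confirm sshd is accepting connections.
func waitForSSH(addr, user, keyPath string) error {
    key, err := os.ReadFile(keyPath)
    if err != nil {
        return err
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        return err
    }
    cfg := &ssh.ClientConfig{
        User:            user,
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    }
    client, err := ssh.Dial("tcp", addr, cfg)
    if err != nil {
        return err
    }
    defer client.Close()
    session, err := client.NewSession()
    if err != nil {
        return err
    }
    defer session.Close()
    return session.Run("exit 0")
}

func main() {
    keyPath := "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa"
    if err := waitForSSH("192.168.39.220:22", "docker", keyPath); err != nil {
        log.Fatal(err)
    }
    fmt.Println("ssh is available")
}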
	I0802 17:44:38.214118   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:44:38.214130   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.217033   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.217446   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.217469   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.217716   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.217933   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.218065   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.218187   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.218324   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.218495   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.218508   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:44:38.319349   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:44:38.319439   23378 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:44:38.319451   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:44:38.319459   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.319784   23378 buildroot.go:166] provisioning hostname "ha-652395-m02"
	I0802 17:44:38.319806   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.319988   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.322329   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.322663   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.322698   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.322835   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.323023   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.323189   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.323360   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.323519   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.323701   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.323714   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395-m02 && echo "ha-652395-m02" | sudo tee /etc/hostname
	I0802 17:44:38.436740   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395-m02
	
	I0802 17:44:38.436767   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.439340   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.439683   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.439704   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.439915   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.440058   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.440228   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.440357   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.440518   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.440679   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.440694   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:44:38.551741   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
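The shell block above makes the hostname change idempotent in /etc/hosts: if a 127.0.1.1 line exists it is rewritten, otherwise one is appended, and nothing happens if the hostname is already present. The same logic as a pure Go string transformation; the function name and sample input are illustrative.

package main

import (
    "fmt"
    "regexp"
    "strings"
)

// ensureHostsEntry makes sure 127.0.1.1 maps to the node's hostname,
// replacing an existing 127.0.1.1 line if present and appending otherwise.
func ensureHostsEntry(hosts, hostname string) string {
    if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
        return hosts // already present, nothing to do
    }
    loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    if loopback.MatchString(hosts) {
        return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
    }
    if !strings.HasSuffix(hosts, "\n") {
        hosts += "\n"
    }
    return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
    hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
    fmt.Print(ensureHostsEntry(hosts, "ha-652395-m02"))
}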
	I0802 17:44:38.551770   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:44:38.551788   23378 buildroot.go:174] setting up certificates
	I0802 17:44:38.551800   23378 provision.go:84] configureAuth start
	I0802 17:44:38.551808   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.552063   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:38.554962   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.555316   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.555342   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.555517   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.557789   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.558146   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.558176   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.558317   23378 provision.go:143] copyHostCerts
	I0802 17:44:38.558347   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:44:38.558374   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:44:38.558383   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:44:38.558449   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:44:38.558516   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:44:38.558532   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:44:38.558539   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:44:38.558562   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:44:38.558604   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:44:38.558620   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:44:38.558625   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:44:38.558645   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:44:38.558693   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395-m02 san=[127.0.0.1 192.168.39.220 ha-652395-m02 localhost minikube]
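provision.go:117 issues a server certificate for the new node, signed by the cluster CA and carrying the SANs listed above (127.0.0.1, 192.168.39.220, ha-652395-m02, localhost, minikube). A rough Go sketch of that step with crypto/x509, assuming an RSA CA key in PKCS#1 PEM; key size, serial number and file names are illustrative, not minikube's exact values.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "log"
    "math/big"
    "net"
    "os"
    "time"
)

// loadCA reads the CA certificate and its RSA private key from PEM files.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
    certPEM, err := os.ReadFile(certPath)
    if err != nil {
        log.Fatal(err)
    }
    keyPEM, err := os.ReadFile(keyPath)
    if err != nil {
        log.Fatal(err)
    }
    certBlock, _ := pem.Decode(certPEM)
    keyBlock, _ := pem.Decode(keyPEM)
    if certBlock == nil || keyBlock == nil {
        log.Fatal("invalid CA PEM data")
    }
    caCert, err := x509.ParseCertificate(certBlock.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    if err != nil {
        log.Fatal(err)
    }
    return caCert, caKey
}

func main() {
    caCert, caKey := loadCA("ca.pem", "ca-key.pem")
    serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        log.Fatal(err)
    }
    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(time.Now().UnixNano()),
        Subject:      pkix.Name{Organization: []string{"jenkins.ha-652395-m02"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config logged above
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        DNSNames:     []string{"ha-652395-m02", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    if err != nil {
        log.Fatal(err)
    }
    out, err := os.Create("server.pem")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()
    if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
        log.Fatal(err)
    }
}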
	I0802 17:44:38.671752   23378 provision.go:177] copyRemoteCerts
	I0802 17:44:38.671807   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:44:38.671831   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.674377   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.674746   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.674776   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.674955   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.675166   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.675320   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.675457   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:38.757096   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:44:38.757200   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:44:38.779767   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:44:38.779830   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:44:38.801698   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:44:38.801769   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 17:44:38.822909   23378 provision.go:87] duration metric: took 271.098404ms to configureAuth
	I0802 17:44:38.822936   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:44:38.823161   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:38.823242   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.825732   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.826166   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.826202   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.826372   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.826581   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.826796   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.826908   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.827087   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.827297   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.827312   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:44:39.088891   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:44:39.088922   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:44:39.088933   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetURL
	I0802 17:44:39.090300   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using libvirt version 6000000
	I0802 17:44:39.092491   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.092929   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.092956   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.093127   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:44:39.093142   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:44:39.093148   23378 client.go:171] duration metric: took 20.931409084s to LocalClient.Create
	I0802 17:44:39.093170   23378 start.go:167] duration metric: took 20.931472826s to libmachine.API.Create "ha-652395"
	I0802 17:44:39.093182   23378 start.go:293] postStartSetup for "ha-652395-m02" (driver="kvm2")
	I0802 17:44:39.093203   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:44:39.093232   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.093466   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:44:39.093502   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.095643   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.095927   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.095966   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.096065   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.096227   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.096422   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.096584   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.176707   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:44:39.180614   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:44:39.180641   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:44:39.180712   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:44:39.180804   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:44:39.180816   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:44:39.180927   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:44:39.189414   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:44:39.212745   23378 start.go:296] duration metric: took 119.54014ms for postStartSetup
	I0802 17:44:39.212798   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:39.213390   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:39.215996   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.216331   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.216353   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.216579   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:39.216783   23378 start.go:128] duration metric: took 21.073923256s to createHost
	I0802 17:44:39.216813   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.218819   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.219124   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.219150   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.219276   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.219450   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.219614   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.219728   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.219909   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:39.220059   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:39.220069   23378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:44:39.319869   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620679.291917014
	
	I0802 17:44:39.319888   23378 fix.go:216] guest clock: 1722620679.291917014
	I0802 17:44:39.319895   23378 fix.go:229] Guest: 2024-08-02 17:44:39.291917014 +0000 UTC Remote: 2024-08-02 17:44:39.216799126 +0000 UTC m=+71.718511413 (delta=75.117888ms)
	I0802 17:44:39.319910   23378 fix.go:200] guest clock delta is within tolerance: 75.117888ms
	I0802 17:44:39.319915   23378 start.go:83] releasing machines lock for "ha-652395-m02", held for 21.17719812s
	I0802 17:44:39.319936   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.320212   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:39.323026   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.323417   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.323439   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.325474   23378 out.go:177] * Found network options:
	I0802 17:44:39.326716   23378 out.go:177]   - NO_PROXY=192.168.39.210
	W0802 17:44:39.327787   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:44:39.327816   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328312   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328501   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328594   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:44:39.328634   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	W0802 17:44:39.328708   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:44:39.328783   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:44:39.328802   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.331373   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.331699   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.331786   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.331814   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.332009   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.332135   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.332156   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.332157   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.332313   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.332325   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.332476   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.332486   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.332592   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.332754   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.572919   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:44:39.578198   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:44:39.578263   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:44:39.593447   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:44:39.593468   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:44:39.593521   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:44:39.608957   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:44:39.623784   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:44:39.623836   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:44:39.637348   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:44:39.650294   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:44:39.756801   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:44:39.923019   23378 docker.go:233] disabling docker service ...
	I0802 17:44:39.923080   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:44:39.936516   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:44:39.948188   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:44:40.080438   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:44:40.210892   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:44:40.223531   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:44:40.240537   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:44:40.240619   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.249975   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:44:40.250029   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.259558   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.268600   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.277635   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:44:40.286932   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.295795   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.310995   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.320007   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:44:40.328202   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:44:40.328250   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:44:40.339337   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:44:40.348015   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:44:40.464729   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
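The run above prepares the container runtime on the new node: cri-docker and docker are stopped and masked, crictl is pointed at the CRI-O socket, /etc/crio/crio.conf.d/02-crio.conf is rewritten in place (pause image, cgroupfs cgroup manager, unprivileged-port sysctl), br_netfilter and ip_forward are enabled, and crio is restarted. A condensed Go sketch of driving that same shell sequence is below; sshRun is a hypothetical helper (here it just shells out locally) and the step list is trimmed to the essentials rather than minikube's full sequence.

// crio_config_sketch.go - illustrative sequence only, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
)

// sshRun is a hypothetical helper; for illustration it runs the command locally.
func sshRun(cmd string) error {
	return exec.Command("sh", "-c", cmd).Run()
}

// configureCRIO mirrors the logged steps: crictl endpoint, pause image, cgroup driver,
// kernel prerequisites, then a runtime restart.
func configureCRIO(pauseImage string) error {
	steps := []string{
		`sudo mkdir -p /etc && printf '%s\n' "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := sshRun(s); err != nil {
			return fmt.Errorf("step %q failed: %w", s, err)
		}
	}
	return nil
}

func main() {
	_ = configureCRIO("registry.k8s.io/pause:3.9")
}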
	I0802 17:44:40.594497   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:44:40.594590   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:44:40.602160   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:44:40.602208   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:44:40.605735   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:44:40.639247   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:44:40.639336   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:44:40.665526   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:44:40.695767   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:44:40.697068   23378 out.go:177]   - env NO_PROXY=192.168.39.210
	I0802 17:44:40.698166   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:40.700893   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:40.701259   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:40.701277   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:40.701456   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:44:40.705310   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:44:40.717053   23378 mustload.go:65] Loading cluster: ha-652395
	I0802 17:44:40.717224   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:40.717523   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:40.717561   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:40.732668   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0802 17:44:40.733146   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:40.733614   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:40.733637   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:40.733935   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:40.734106   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:40.735587   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:40.735855   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:40.735886   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:40.750415   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0802 17:44:40.750836   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:40.751334   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:40.751359   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:40.751671   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:40.751897   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:40.752040   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.220
	I0802 17:44:40.752049   23378 certs.go:194] generating shared ca certs ...
	I0802 17:44:40.752062   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.752173   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:44:40.752208   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:44:40.752217   23378 certs.go:256] generating profile certs ...
	I0802 17:44:40.752288   23378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:44:40.752312   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99
	I0802 17:44:40.752323   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.254]
	I0802 17:44:40.937178   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 ...
	I0802 17:44:40.937208   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99: {Name:mk49cecd55ad68f4b0a4a86e8e819e8a12c316a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.937394   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99 ...
	I0802 17:44:40.937408   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99: {Name:mk536771078b4c1dcd616008289f4b5227c528ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.937478   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:44:40.937624   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:44:40.937757   23378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
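certs.go:363 regenerates the apiserver serving certificate because its SAN list must now cover the second control plane's IP (192.168.39.220) in addition to the service IPs, localhost, the first node, and the kube-vip VIP (192.168.39.254). The Go sketch below shows how such a certificate can be issued with crypto/x509 using exactly that IP SAN list; it creates a throwaway CA inline and is illustrative only, not minikube's certs.go.

// apiserver_cert_sketch.go - illustrative SAN handling only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA (minikube reuses the cached minikubeCA key pair instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs matching the log: service IPs, localhost, both nodes, and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.210"), net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}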
	I0802 17:44:40.937774   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:44:40.937787   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:44:40.937805   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:44:40.937820   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:44:40.937835   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:44:40.937851   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:44:40.937866   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:44:40.937877   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:44:40.937922   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:44:40.937953   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:44:40.937964   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:44:40.937989   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:44:40.938016   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:44:40.938040   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:44:40.938080   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:44:40.938111   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:44:40.938145   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:44:40.938160   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:40.938197   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:40.941249   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:40.941603   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:40.941624   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:40.941813   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:40.942023   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:40.942176   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:40.942313   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:41.015442   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0802 17:44:41.020810   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0802 17:44:41.031202   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0802 17:44:41.035560   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0802 17:44:41.046604   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0802 17:44:41.050405   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0802 17:44:41.060839   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0802 17:44:41.065156   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0802 17:44:41.075342   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0802 17:44:41.079408   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0802 17:44:41.090157   23378 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0802 17:44:41.094084   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0802 17:44:41.104144   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:44:41.126799   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:44:41.150692   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:44:41.173258   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:44:41.199287   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0802 17:44:41.224657   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:44:41.252426   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:44:41.275051   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:44:41.296731   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:44:41.318223   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:44:41.339361   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:44:41.360210   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0802 17:44:41.375805   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0802 17:44:41.391015   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0802 17:44:41.406187   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0802 17:44:41.421086   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0802 17:44:41.435625   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0802 17:44:41.450732   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0802 17:44:41.466404   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:44:41.471656   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:44:41.481222   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.485089   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.485127   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.490157   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:44:41.499377   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:44:41.508987   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.512907   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.512959   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.518247   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:44:41.527658   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:44:41.539710   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.544001   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.544053   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.549106   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 17:44:41.558711   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:44:41.562637   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:44:41.562683   23378 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0802 17:44:41.562771   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:44:41.562801   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:44:41.562843   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:44:41.579868   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:44:41.579940   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
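The manifest above is the static pod that kube-vip.go writes to /etc/kubernetes/manifests on each control plane: kube-vip runs with NET_ADMIN/NET_RAW on the host network, elects a leader via the plndr-cp-lock lease, announces the VIP 192.168.39.254 over ARP on eth0, and (lb_enable) load-balances apiserver traffic on port 8443. A trimmed, hypothetical text/template rendering of such a manifest is sketched below; it is not minikube's actual template.

// kubevip_template_sketch.go - trimmed, illustrative manifest rendering.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_leaderelection
      value: "true"
  hostNetwork: true
`

type params struct {
	Image string
	VIP   string
	Port  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	_ = t.Execute(os.Stdout, params{Image: "ghcr.io/kube-vip/kube-vip:v0.8.0", VIP: "192.168.39.254", Port: 8443})
}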
	I0802 17:44:41.579993   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:44:41.589299   23378 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0802 17:44:41.589376   23378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0802 17:44:41.598147   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0802 17:44:41.598174   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:44:41.598227   23378 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0802 17:44:41.598243   23378 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0802 17:44:41.598269   23378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:44:41.602252   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0802 17:44:41.602274   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0802 17:44:47.952915   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:44:47.967335   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:44:47.967428   23378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:44:47.971470   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0802 17:44:47.971510   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0802 17:44:49.758724   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:44:49.758815   23378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:44:49.763825   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0802 17:44:49.763897   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
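download.go:107 fetches kubelet and kubeadm with a ?checksum=file:<url>.sha256 query, i.e. each download is verified against the published SHA-256 before being cached and copied into /var/lib/minikube/binaries/v1.30.3. A minimal Go sketch of that verify-after-download step is below; it assumes the .sha256 sidecar starts with a bare hex digest and is not minikube's download code.

// checksum_sketch.go - illustrative download verification.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// verifyAgainst downloads the artifact and its .sha256 sidecar and compares digests.
func verifyAgainst(binURL, sumURL string) ([]byte, error) {
	bin, err := fetch(binURL)
	if err != nil {
		return nil, err
	}
	sum, err := fetch(sumURL)
	if err != nil {
		return nil, err
	}
	fields := strings.Fields(strings.TrimSpace(string(sum)))
	if len(fields) == 0 {
		return nil, fmt.Errorf("empty checksum file %s", sumURL)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != fields[0] {
		return nil, fmt.Errorf("checksum mismatch: got %x want %s", got, fields[0])
	}
	return bin, nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm"
	if _, err := verifyAgainst(base, base+".sha256"); err != nil {
		fmt.Println(err)
	}
}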
	I0802 17:44:49.987338   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0802 17:44:49.996030   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0802 17:44:50.012191   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:44:50.027095   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0802 17:44:50.042184   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:44:50.045750   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:44:50.056837   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:44:50.186311   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:44:50.203539   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:50.203985   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:50.204036   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:50.219116   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0802 17:44:50.219550   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:50.220062   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:50.220077   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:50.220412   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:50.220611   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:50.220760   23378 start.go:317] joinCluster: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:44:50.220854   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0802 17:44:50.220875   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:50.223780   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:50.224134   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:50.224164   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:50.224277   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:50.224441   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:50.224578   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:50.224725   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:50.385078   23378 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:50.385119   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0pq2v.9gfsnqj2az7qhpq0 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0802 17:45:12.471079   23378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0pq2v.9gfsnqj2az7qhpq0 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (22.085935956s)
	I0802 17:45:12.471132   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0802 17:45:12.973256   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395-m02 minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=false
	I0802 17:45:13.095420   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-652395-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0802 17:45:13.220320   23378 start.go:319] duration metric: took 22.999556113s to joinCluster
	I0802 17:45:13.220413   23378 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:45:13.220724   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:45:13.221807   23378 out.go:177] * Verifying Kubernetes components...
	I0802 17:45:13.223081   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:45:13.485242   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:45:13.518043   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:45:13.518398   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0802 17:45:13.518489   23378 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.210:8443
	I0802 17:45:13.518746   23378 node_ready.go:35] waiting up to 6m0s for node "ha-652395-m02" to be "Ready" ...
	I0802 17:45:13.518858   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:13.518871   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:13.518882   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:13.518892   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:13.548732   23378 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0802 17:45:14.019746   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:14.019774   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:14.019786   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:14.019794   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:14.027852   23378 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0802 17:45:14.519852   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:14.519870   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:14.519879   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:14.519882   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:14.531836   23378 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0802 17:45:15.019756   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:15.019782   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:15.019796   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:15.019802   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:15.023265   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:15.519640   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:15.519661   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:15.519668   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:15.519673   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:15.522645   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:15.523350   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:16.019475   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:16.019499   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:16.019511   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:16.019581   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:16.022465   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:16.519379   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:16.519406   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:16.519414   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:16.519417   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:16.545704   23378 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0802 17:45:17.019706   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:17.019731   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:17.019743   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:17.019749   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:17.022789   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:17.519541   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:17.519562   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:17.519571   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:17.519577   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:17.524279   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:17.525076   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:18.018940   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:18.018961   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:18.018968   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:18.018971   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:18.022419   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:18.519766   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:18.519790   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:18.519800   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:18.519806   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:18.523269   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:19.019258   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:19.019280   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:19.019291   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:19.019295   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:19.022515   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:19.519292   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:19.519318   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:19.519329   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:19.519335   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:19.522831   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:20.019888   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:20.019911   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:20.019919   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:20.019924   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:20.097223   23378 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0802 17:45:20.098490   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:20.519346   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:20.519367   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:20.519375   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:20.519380   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:20.522553   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:21.019724   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:21.019744   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:21.019752   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:21.019756   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:21.023258   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:21.519006   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:21.519030   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:21.519038   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:21.519042   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:21.522058   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.019027   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:22.019057   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:22.019070   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:22.019078   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:22.022631   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.518979   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:22.519004   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:22.519015   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:22.519020   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:22.522160   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.523156   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:23.019167   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:23.019191   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:23.019205   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:23.019209   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:23.022648   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:23.519023   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:23.519046   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:23.519054   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:23.519058   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:23.522488   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.019356   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:24.019378   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:24.019387   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:24.019391   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:24.022642   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.519672   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:24.519693   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:24.519704   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:24.519709   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:24.522892   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.523502   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:25.019971   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:25.019996   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:25.020004   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:25.020008   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:25.023202   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:25.519619   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:25.519641   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:25.519648   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:25.519654   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:25.522907   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:26.019939   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:26.019963   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:26.019970   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:26.019975   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:26.023421   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:26.519530   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:26.519552   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:26.519560   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:26.519563   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:26.522693   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:27.019819   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:27.019840   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:27.019848   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:27.019853   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:27.023031   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:27.023498   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:27.519213   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:27.519239   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:27.519249   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:27.519255   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:27.523180   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:28.019899   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:28.019918   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:28.019926   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:28.019929   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:28.023009   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:28.519448   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:28.519473   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:28.519481   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:28.519487   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:28.522731   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:29.019738   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:29.019764   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:29.019774   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:29.019780   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:29.023263   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:29.023781   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:29.519127   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:29.519156   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:29.519165   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:29.519177   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:29.522294   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:30.018921   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:30.018945   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:30.018952   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:30.018957   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:30.021949   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:30.519412   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:30.519433   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:30.519441   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:30.519444   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:30.522558   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:31.019167   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:31.019193   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:31.019202   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:31.019209   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:31.021946   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:31.519950   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:31.519976   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:31.519983   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:31.519986   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:31.523156   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:31.523846   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:32.019168   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.019192   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.019202   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.019206   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.022636   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.023238   23378 node_ready.go:49] node "ha-652395-m02" has status "Ready":"True"
	I0802 17:45:32.023263   23378 node_ready.go:38] duration metric: took 18.504493823s for node "ha-652395-m02" to be "Ready" ...
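
The node_ready polling above re-issues GET /api/v1/nodes/ha-652395-m02 roughly every 500 ms until the node's Ready condition flips to True. A minimal client-go sketch of the same idea (hypothetical helper, placeholder kubeconfig path; not minikube's actual node_ready code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the node's Ready condition is True.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-652395-m02", metav1.GetOptions{})
            if err == nil && nodeIsReady(node) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500 ms polling cadence in the log
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
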
	I0802 17:45:32.023276   23378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:45:32.023364   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:32.023376   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.023387   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.023393   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.027721   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:32.033894   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.033970   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7bnn4
	I0802 17:45:32.033979   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.033987   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.033991   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.036511   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.037139   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.037159   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.037170   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.037177   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.039503   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.040279   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.040307   23378 pod_ready.go:81] duration metric: took 6.388729ms for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.040321   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.040384   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzmsx
	I0802 17:45:32.040397   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.040407   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.040416   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.042585   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.043300   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.043316   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.043323   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.043327   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.045476   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.045919   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.045936   23378 pod_ready.go:81] duration metric: took 5.60755ms for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.045944   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.045985   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395
	I0802 17:45:32.045992   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.045999   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.046002   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.047897   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.048387   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.048401   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.048408   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.048412   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.050267   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.050713   23378 pod_ready.go:92] pod "etcd-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.050732   23378 pod_ready.go:81] duration metric: took 4.781908ms for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.050743   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.050845   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m02
	I0802 17:45:32.050857   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.050866   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.050873   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.052891   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.053386   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.053399   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.053409   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.053415   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.055225   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.055582   23378 pod_ready.go:92] pod "etcd-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.055597   23378 pod_ready.go:81] duration metric: took 4.847646ms for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.055613   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.219994   23378 request.go:629] Waited for 164.311449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:45:32.220046   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:45:32.220051   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.220059   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.220062   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.223269   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.419314   23378 request.go:629] Waited for 195.314796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.419367   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.419372   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.419379   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.419383   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.422144   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.422635   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.422653   23378 pod_ready.go:81] duration metric: took 367.032422ms for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
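
The request.go "Waited for ... due to client-side throttling" lines are emitted by the Kubernetes client's own token-bucket rate limiter, not by server-side priority and fairness; the ~200 ms gaps are consistent with a limiter of a few requests per second. A rough sketch of that pattern with golang.org/x/time/rate (illustrative rate and burst values, not the client's actual settings):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Token bucket: 5 requests/second with a burst of 10 (assumed values for illustration).
        limiter := rate.NewLimiter(rate.Limit(5), 10)

        for i := 0; i < 15; i++ {
            start := time.Now()
            if err := limiter.Wait(context.Background()); err != nil { // blocks until a token is free
                panic(err)
            }
            if d := time.Since(start); d > time.Millisecond {
                // analogous to the "Waited for ... due to client-side throttling" log lines
                fmt.Printf("request %d throttled for %v\n", i, d)
            }
            // issue the API request here
        }
    }
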
	I0802 17:45:32.422665   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.619816   23378 request.go:629] Waited for 197.083521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:45:32.619891   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:45:32.619898   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.619938   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.619947   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.623539   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.819747   23378 request.go:629] Waited for 195.467246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.819815   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.819829   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.819841   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.819849   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.822359   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.822831   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.822849   23378 pod_ready.go:81] duration metric: took 400.175771ms for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.822862   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.019985   23378 request.go:629] Waited for 197.042473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:45:33.020039   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:45:33.020045   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.020053   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.020058   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.023333   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.219330   23378 request.go:629] Waited for 195.37121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:33.219395   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:33.219402   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.219415   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.219421   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.222461   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.222970   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:33.222989   23378 pod_ready.go:81] duration metric: took 400.118179ms for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.223001   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.420137   23378 request.go:629] Waited for 197.048244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:45:33.420225   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:45:33.420236   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.420247   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.420256   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.423944   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.619883   23378 request.go:629] Waited for 195.369597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:33.619962   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:33.619969   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.619980   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.619990   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.623435   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.623841   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:33.623857   23378 pod_ready.go:81] duration metric: took 400.845557ms for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.623869   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.820053   23378 request.go:629] Waited for 196.116391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:45:33.820133   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:45:33.820139   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.820147   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.820152   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.822992   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.019973   23378 request.go:629] Waited for 196.348436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.020037   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.020045   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.020057   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.020062   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.023256   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.023795   23378 pod_ready.go:92] pod "kube-proxy-l7npk" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.023812   23378 pod_ready.go:81] duration metric: took 399.936451ms for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.023822   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.219932   23378 request.go:629] Waited for 196.048785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:45:34.220019   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:45:34.220030   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.220041   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.220048   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.222994   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.419914   23378 request.go:629] Waited for 196.363004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:34.419967   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:34.419972   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.419980   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.419984   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.423711   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.424305   23378 pod_ready.go:92] pod "kube-proxy-rtbb6" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.424351   23378 pod_ready.go:81] duration metric: took 400.520107ms for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.424369   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.619408   23378 request.go:629] Waited for 194.97766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:45:34.619493   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:45:34.619504   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.619515   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.619522   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.622283   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.819200   23378 request.go:629] Waited for 196.25146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.819285   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.819296   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.819306   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.819320   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.822755   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.823703   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.823724   23378 pod_ready.go:81] duration metric: took 399.347186ms for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.823736   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:35.019687   23378 request.go:629] Waited for 195.881363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:45:35.019743   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:45:35.019748   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.019758   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.019765   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.023283   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.220230   23378 request.go:629] Waited for 196.388546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:35.220284   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:35.220290   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.220300   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.220306   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.223673   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.224098   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:35.224115   23378 pod_ready.go:81] duration metric: took 400.371867ms for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:35.224125   23378 pod_ready.go:38] duration metric: took 3.200833837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:45:35.224138   23378 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:45:35.224194   23378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:45:35.240089   23378 api_server.go:72] duration metric: took 22.019638509s to wait for apiserver process to appear ...
	I0802 17:45:35.240112   23378 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:45:35.240131   23378 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0802 17:45:35.244269   23378 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0802 17:45:35.244336   23378 round_trippers.go:463] GET https://192.168.39.210:8443/version
	I0802 17:45:35.244343   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.244351   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.244355   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.245181   23378 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0802 17:45:35.245275   23378 api_server.go:141] control plane version: v1.30.3
	I0802 17:45:35.245292   23378 api_server.go:131] duration metric: took 5.174481ms to wait for apiserver health ...
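
The apiserver health wait above is an HTTPS GET against /healthz that expects a 200 response with the literal body "ok", followed by a GET /version to read the control plane version. A bare-bones sketch of the healthz probe (the real check presents the cluster's client certificates; verification is skipped here only to keep the sketch self-contained):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
        }
        resp, err := client.Get("https://192.168.39.210:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with "ok", matching the "returned 200: ok" lines above.
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }
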
	I0802 17:45:35.245300   23378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:45:35.419746   23378 request.go:629] Waited for 174.36045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.419813   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.419818   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.419825   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.419830   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.424825   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:35.429454   23378 system_pods.go:59] 17 kube-system pods found
	I0802 17:45:35.429480   23378 system_pods.go:61] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:45:35.429485   23378 system_pods.go:61] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:45:35.429489   23378 system_pods.go:61] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:45:35.429492   23378 system_pods.go:61] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:45:35.429495   23378 system_pods.go:61] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:45:35.429498   23378 system_pods.go:61] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:45:35.429501   23378 system_pods.go:61] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:45:35.429505   23378 system_pods.go:61] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:45:35.429508   23378 system_pods.go:61] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:45:35.429511   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:45:35.429514   23378 system_pods.go:61] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:45:35.429517   23378 system_pods.go:61] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:45:35.429520   23378 system_pods.go:61] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:45:35.429523   23378 system_pods.go:61] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:45:35.429526   23378 system_pods.go:61] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:45:35.429528   23378 system_pods.go:61] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:45:35.429531   23378 system_pods.go:61] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:45:35.429536   23378 system_pods.go:74] duration metric: took 184.22892ms to wait for pod list to return data ...
	I0802 17:45:35.429544   23378 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:45:35.620020   23378 request.go:629] Waited for 190.404655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:45:35.620077   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:45:35.620083   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.620091   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.620097   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.623444   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.623671   23378 default_sa.go:45] found service account: "default"
	I0802 17:45:35.623688   23378 default_sa.go:55] duration metric: took 194.138636ms for default service account to be created ...
	I0802 17:45:35.623696   23378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:45:35.819867   23378 request.go:629] Waited for 196.105859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.819953   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.819965   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.819975   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.819981   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.825590   23378 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0802 17:45:35.829401   23378 system_pods.go:86] 17 kube-system pods found
	I0802 17:45:35.829428   23378 system_pods.go:89] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:45:35.829434   23378 system_pods.go:89] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:45:35.829438   23378 system_pods.go:89] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:45:35.829443   23378 system_pods.go:89] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:45:35.829448   23378 system_pods.go:89] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:45:35.829452   23378 system_pods.go:89] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:45:35.829455   23378 system_pods.go:89] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:45:35.829460   23378 system_pods.go:89] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:45:35.829463   23378 system_pods.go:89] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:45:35.829467   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:45:35.829471   23378 system_pods.go:89] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:45:35.829474   23378 system_pods.go:89] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:45:35.829479   23378 system_pods.go:89] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:45:35.829482   23378 system_pods.go:89] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:45:35.829489   23378 system_pods.go:89] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:45:35.829492   23378 system_pods.go:89] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:45:35.829495   23378 system_pods.go:89] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:45:35.829501   23378 system_pods.go:126] duration metric: took 205.801478ms to wait for k8s-apps to be running ...
	I0802 17:45:35.829511   23378 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:45:35.829552   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:45:35.844416   23378 system_svc.go:56] duration metric: took 14.896551ms WaitForService to wait for kubelet
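
Checks such as "sudo systemctl is-active --quiet service kubelet" are executed over SSH inside the VM. A minimal sketch with golang.org/x/crypto/ssh (placeholder key path, address and user; not minikube's ssh_runner implementation):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/id_rsa") // placeholder; the real key lives under .minikube/machines/<name>/
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // placeholder user
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.220:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Same check as the log line above: exit status 0 means kubelet is active.
        if err := session.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
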
	I0802 17:45:35.844449   23378 kubeadm.go:582] duration metric: took 22.624001927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:45:35.844472   23378 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:45:36.019899   23378 request.go:629] Waited for 175.358786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes
	I0802 17:45:36.019973   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes
	I0802 17:45:36.019979   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:36.019986   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:36.019991   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:36.022913   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:36.023637   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:45:36.023657   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:45:36.023667   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:45:36.023670   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:45:36.023674   23378 node_conditions.go:105] duration metric: took 179.19768ms to run NodePressure ...
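
The NodePressure verification above reads each node's reported capacity. A small client-go sketch that prints the same cpu and ephemeral-storage figures (placeholder kubeconfig path):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // Corresponds to the "node cpu capacity" / "node storage ephemeral capacity" lines above.
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
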
	I0802 17:45:36.023684   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:45:36.023707   23378 start.go:255] writing updated cluster config ...
	I0802 17:45:36.025800   23378 out.go:177] 
	I0802 17:45:36.027316   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:45:36.027411   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:45:36.028935   23378 out.go:177] * Starting "ha-652395-m03" control-plane node in "ha-652395" cluster
	I0802 17:45:36.030014   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:45:36.030042   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:45:36.030149   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:45:36.030162   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:45:36.030247   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:45:36.030437   23378 start.go:360] acquireMachinesLock for ha-652395-m03: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:45:36.030483   23378 start.go:364] duration metric: took 24.923µs to acquireMachinesLock for "ha-652395-m03"
	I0802 17:45:36.030501   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:45:36.030592   23378 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0802 17:45:36.032070   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:45:36.032163   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:45:36.032197   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:36.047016   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0802 17:45:36.047629   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:36.048162   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:45:36.048186   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:36.048493   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:36.048684   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:45:36.048823   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:45:36.048964   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:45:36.048994   23378 client.go:168] LocalClient.Create starting
	I0802 17:45:36.049027   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:45:36.049068   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:45:36.049089   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:45:36.049157   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:45:36.049198   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:45:36.049214   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:45:36.049233   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:45:36.049242   23378 main.go:141] libmachine: (ha-652395-m03) Calling .PreCreateCheck
	I0802 17:45:36.049413   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:45:36.049924   23378 main.go:141] libmachine: Creating machine...
	I0802 17:45:36.049938   23378 main.go:141] libmachine: (ha-652395-m03) Calling .Create
	I0802 17:45:36.050035   23378 main.go:141] libmachine: (ha-652395-m03) Creating KVM machine...
	I0802 17:45:36.051210   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found existing default KVM network
	I0802 17:45:36.051359   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found existing private KVM network mk-ha-652395
	I0802 17:45:36.051513   23378 main.go:141] libmachine: (ha-652395-m03) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 ...
	I0802 17:45:36.051537   23378 main.go:141] libmachine: (ha-652395-m03) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:45:36.051582   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.051497   24173 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:45:36.051645   23378 main.go:141] libmachine: (ha-652395-m03) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:45:36.283642   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.283510   24173 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa...
	I0802 17:45:36.404288   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.404161   24173 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/ha-652395-m03.rawdisk...
	I0802 17:45:36.404325   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Writing magic tar header
	I0802 17:45:36.404340   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Writing SSH key tar header
	I0802 17:45:36.404367   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.404314   24173 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 ...
	I0802 17:45:36.404478   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03
	I0802 17:45:36.404506   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:45:36.404522   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 (perms=drwx------)
	I0802 17:45:36.404541   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:45:36.404555   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:45:36.404580   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:45:36.404598   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:45:36.404608   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:45:36.404623   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:45:36.404631   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:45:36.404641   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home
	I0802 17:45:36.404658   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Skipping /home - not owner
	I0802 17:45:36.404672   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:45:36.404688   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:45:36.404699   23378 main.go:141] libmachine: (ha-652395-m03) Creating domain...
	I0802 17:45:36.405773   23378 main.go:141] libmachine: (ha-652395-m03) define libvirt domain using xml: 
	I0802 17:45:36.405799   23378 main.go:141] libmachine: (ha-652395-m03) <domain type='kvm'>
	I0802 17:45:36.405811   23378 main.go:141] libmachine: (ha-652395-m03)   <name>ha-652395-m03</name>
	I0802 17:45:36.405818   23378 main.go:141] libmachine: (ha-652395-m03)   <memory unit='MiB'>2200</memory>
	I0802 17:45:36.405827   23378 main.go:141] libmachine: (ha-652395-m03)   <vcpu>2</vcpu>
	I0802 17:45:36.405837   23378 main.go:141] libmachine: (ha-652395-m03)   <features>
	I0802 17:45:36.405843   23378 main.go:141] libmachine: (ha-652395-m03)     <acpi/>
	I0802 17:45:36.405850   23378 main.go:141] libmachine: (ha-652395-m03)     <apic/>
	I0802 17:45:36.405859   23378 main.go:141] libmachine: (ha-652395-m03)     <pae/>
	I0802 17:45:36.405866   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.405876   23378 main.go:141] libmachine: (ha-652395-m03)   </features>
	I0802 17:45:36.405892   23378 main.go:141] libmachine: (ha-652395-m03)   <cpu mode='host-passthrough'>
	I0802 17:45:36.405924   23378 main.go:141] libmachine: (ha-652395-m03)   
	I0802 17:45:36.405968   23378 main.go:141] libmachine: (ha-652395-m03)   </cpu>
	I0802 17:45:36.405983   23378 main.go:141] libmachine: (ha-652395-m03)   <os>
	I0802 17:45:36.405995   23378 main.go:141] libmachine: (ha-652395-m03)     <type>hvm</type>
	I0802 17:45:36.406008   23378 main.go:141] libmachine: (ha-652395-m03)     <boot dev='cdrom'/>
	I0802 17:45:36.406027   23378 main.go:141] libmachine: (ha-652395-m03)     <boot dev='hd'/>
	I0802 17:45:36.406038   23378 main.go:141] libmachine: (ha-652395-m03)     <bootmenu enable='no'/>
	I0802 17:45:36.406044   23378 main.go:141] libmachine: (ha-652395-m03)   </os>
	I0802 17:45:36.406054   23378 main.go:141] libmachine: (ha-652395-m03)   <devices>
	I0802 17:45:36.406066   23378 main.go:141] libmachine: (ha-652395-m03)     <disk type='file' device='cdrom'>
	I0802 17:45:36.406083   23378 main.go:141] libmachine: (ha-652395-m03)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/boot2docker.iso'/>
	I0802 17:45:36.406095   23378 main.go:141] libmachine: (ha-652395-m03)       <target dev='hdc' bus='scsi'/>
	I0802 17:45:36.406106   23378 main.go:141] libmachine: (ha-652395-m03)       <readonly/>
	I0802 17:45:36.406123   23378 main.go:141] libmachine: (ha-652395-m03)     </disk>
	I0802 17:45:36.406137   23378 main.go:141] libmachine: (ha-652395-m03)     <disk type='file' device='disk'>
	I0802 17:45:36.406151   23378 main.go:141] libmachine: (ha-652395-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:45:36.406164   23378 main.go:141] libmachine: (ha-652395-m03)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/ha-652395-m03.rawdisk'/>
	I0802 17:45:36.406172   23378 main.go:141] libmachine: (ha-652395-m03)       <target dev='hda' bus='virtio'/>
	I0802 17:45:36.406182   23378 main.go:141] libmachine: (ha-652395-m03)     </disk>
	I0802 17:45:36.406190   23378 main.go:141] libmachine: (ha-652395-m03)     <interface type='network'>
	I0802 17:45:36.406196   23378 main.go:141] libmachine: (ha-652395-m03)       <source network='mk-ha-652395'/>
	I0802 17:45:36.406206   23378 main.go:141] libmachine: (ha-652395-m03)       <model type='virtio'/>
	I0802 17:45:36.406216   23378 main.go:141] libmachine: (ha-652395-m03)     </interface>
	I0802 17:45:36.406231   23378 main.go:141] libmachine: (ha-652395-m03)     <interface type='network'>
	I0802 17:45:36.406243   23378 main.go:141] libmachine: (ha-652395-m03)       <source network='default'/>
	I0802 17:45:36.406254   23378 main.go:141] libmachine: (ha-652395-m03)       <model type='virtio'/>
	I0802 17:45:36.406262   23378 main.go:141] libmachine: (ha-652395-m03)     </interface>
	I0802 17:45:36.406272   23378 main.go:141] libmachine: (ha-652395-m03)     <serial type='pty'>
	I0802 17:45:36.406278   23378 main.go:141] libmachine: (ha-652395-m03)       <target port='0'/>
	I0802 17:45:36.406284   23378 main.go:141] libmachine: (ha-652395-m03)     </serial>
	I0802 17:45:36.406290   23378 main.go:141] libmachine: (ha-652395-m03)     <console type='pty'>
	I0802 17:45:36.406302   23378 main.go:141] libmachine: (ha-652395-m03)       <target type='serial' port='0'/>
	I0802 17:45:36.406329   23378 main.go:141] libmachine: (ha-652395-m03)     </console>
	I0802 17:45:36.406348   23378 main.go:141] libmachine: (ha-652395-m03)     <rng model='virtio'>
	I0802 17:45:36.406363   23378 main.go:141] libmachine: (ha-652395-m03)       <backend model='random'>/dev/random</backend>
	I0802 17:45:36.406379   23378 main.go:141] libmachine: (ha-652395-m03)     </rng>
	I0802 17:45:36.406391   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.406400   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.406409   23378 main.go:141] libmachine: (ha-652395-m03)   </devices>
	I0802 17:45:36.406420   23378 main.go:141] libmachine: (ha-652395-m03) </domain>
	I0802 17:45:36.406441   23378 main.go:141] libmachine: (ha-652395-m03) 
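
The kvm2 driver renders the libvirt domain XML shown above from a template and then defines the domain. A trimmed-down, hypothetical sketch of that templating step (only name/memory/vcpu/os; the real template also carries the disks, networks, serial console and RNG device, and the result is handed to libvirt, e.g. via `virsh define`):

    package main

    import (
        "os"
        "text/template"
    )

    // A heavily trimmed version of the domain definition shown in the log above (placeholder template).
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
    </domain>
    `

    type domainConfig struct {
        Name      string
        MemoryMiB int
        CPUs      int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values taken from the "Creating kvm2 VM (CPUs=2, Memory=2200MB, ...)" line above.
        cfg := domainConfig{Name: "ha-652395-m03", MemoryMiB: 2200, CPUs: 2}
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }
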
	I0802 17:45:36.413279   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:43:36:db in network default
	I0802 17:45:36.413820   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring networks are active...
	I0802 17:45:36.413862   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:36.414657   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring network default is active
	I0802 17:45:36.414968   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring network mk-ha-652395 is active
	I0802 17:45:36.415435   23378 main.go:141] libmachine: (ha-652395-m03) Getting domain xml...
	I0802 17:45:36.416067   23378 main.go:141] libmachine: (ha-652395-m03) Creating domain...
	I0802 17:45:37.658293   23378 main.go:141] libmachine: (ha-652395-m03) Waiting to get IP...
	I0802 17:45:37.659127   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:37.659538   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:37.659586   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:37.659533   24173 retry.go:31] will retry after 278.414041ms: waiting for machine to come up
	I0802 17:45:37.940057   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:37.940529   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:37.940562   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:37.940509   24173 retry.go:31] will retry after 280.874502ms: waiting for machine to come up
	I0802 17:45:38.223047   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:38.223534   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:38.223558   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:38.223511   24173 retry.go:31] will retry after 340.959076ms: waiting for machine to come up
	I0802 17:45:38.566122   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:38.566544   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:38.566567   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:38.566510   24173 retry.go:31] will retry after 573.792131ms: waiting for machine to come up
	I0802 17:45:39.142236   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:39.142669   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:39.142701   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:39.142606   24173 retry.go:31] will retry after 480.184052ms: waiting for machine to come up
	I0802 17:45:39.624228   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:39.624766   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:39.624794   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:39.624719   24173 retry.go:31] will retry after 640.998486ms: waiting for machine to come up
	I0802 17:45:40.267613   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:40.267998   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:40.268025   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:40.267953   24173 retry.go:31] will retry after 1.037547688s: waiting for machine to come up
	I0802 17:45:41.306919   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:41.307496   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:41.307524   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:41.307443   24173 retry.go:31] will retry after 1.487765562s: waiting for machine to come up
	I0802 17:45:42.796982   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:42.797437   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:42.797468   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:42.797389   24173 retry.go:31] will retry after 1.712646843s: waiting for machine to come up
	I0802 17:45:44.512180   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:44.512627   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:44.512655   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:44.512581   24173 retry.go:31] will retry after 2.117852157s: waiting for machine to come up
	I0802 17:45:46.632392   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:46.632797   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:46.632825   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:46.632740   24173 retry.go:31] will retry after 1.87779902s: waiting for machine to come up
	I0802 17:45:48.512236   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:48.512705   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:48.512731   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:48.512659   24173 retry.go:31] will retry after 2.645114759s: waiting for machine to come up
	I0802 17:45:51.159777   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:51.160216   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:51.160240   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:51.160201   24173 retry.go:31] will retry after 3.916763457s: waiting for machine to come up
	I0802 17:45:55.080334   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:55.080702   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:55.080728   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:55.080659   24173 retry.go:31] will retry after 4.726540914s: waiting for machine to come up
	I0802 17:45:59.810530   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.810997   23378 main.go:141] libmachine: (ha-652395-m03) Found IP for machine: 192.168.39.62
	I0802 17:45:59.811032   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has current primary IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.811041   23378 main.go:141] libmachine: (ha-652395-m03) Reserving static IP address...
	I0802 17:45:59.811400   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find host DHCP lease matching {name: "ha-652395-m03", mac: "52:54:00:23:60:5b", ip: "192.168.39.62"} in network mk-ha-652395
	I0802 17:45:59.884517   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Getting to WaitForSSH function...
	I0802 17:45:59.884551   23378 main.go:141] libmachine: (ha-652395-m03) Reserved static IP address: 192.168.39.62
	I0802 17:45:59.884566   23378 main.go:141] libmachine: (ha-652395-m03) Waiting for SSH to be available...
	I0802 17:45:59.887972   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.888390   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:60:5b}
	I0802 17:45:59.888430   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.888577   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using SSH client type: external
	I0802 17:45:59.888599   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa (-rw-------)
	I0802 17:45:59.888629   23378 main.go:141] libmachine: (ha-652395-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:45:59.888641   23378 main.go:141] libmachine: (ha-652395-m03) DBG | About to run SSH command:
	I0802 17:45:59.888655   23378 main.go:141] libmachine: (ha-652395-m03) DBG | exit 0
	I0802 17:46:00.015258   23378 main.go:141] libmachine: (ha-652395-m03) DBG | SSH cmd err, output: <nil>: 
	I0802 17:46:00.015565   23378 main.go:141] libmachine: (ha-652395-m03) KVM machine creation complete!
	I0802 17:46:00.015949   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:46:00.016541   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:00.016754   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:00.016928   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:46:00.016942   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:46:00.018209   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:46:00.018225   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:46:00.018234   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:46:00.018242   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.020481   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.020805   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.020830   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.020978   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.021123   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.021274   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.021372   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.021519   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.021771   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.021787   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:46:00.126517   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
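[Annotation] The "About to run SSH command: exit 0" exchange above is a liveness probe: the machine counts as reachable once a trivial command succeeds over SSH. A minimal sketch of the same idea with golang.org/x/crypto/ssh, assuming key-based auth; the key path is a placeholder, and the insecure host-key callback is only acceptable for a throwaway test VM like this one.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/path/to/id_rsa") // e.g. the machine's id_rsa shown in the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.62:22", cfg)
    	if err != nil {
    		log.Fatalf("ssh not ready: %v", err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// Equivalent of the logged probe: run "exit 0" and treat success as "SSH available".
    	if err := session.Run("exit 0"); err != nil {
    		log.Fatalf("probe failed: %v", err)
    	}
    	fmt.Println("SSH is available")
    }
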
	I0802 17:46:00.126552   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:46:00.126565   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.129422   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.129818   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.129863   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.129986   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.130170   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.130329   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.130493   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.130653   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.130820   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.130832   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:46:00.239767   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:46:00.239864   23378 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:46:00.239880   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:46:00.239890   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.240107   23378 buildroot.go:166] provisioning hostname "ha-652395-m03"
	I0802 17:46:00.240134   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.240295   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.242732   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.243176   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.243203   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.243353   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.243521   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.243667   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.243786   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.243946   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.244098   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.244110   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395-m03 && echo "ha-652395-m03" | sudo tee /etc/hostname
	I0802 17:46:00.365172   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395-m03
	
	I0802 17:46:00.365198   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.367989   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.368345   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.368367   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.368509   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.368726   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.368909   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.369054   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.369248   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.369421   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.369446   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:46:00.484223   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:46:00.484256   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:46:00.484280   23378 buildroot.go:174] setting up certificates
	I0802 17:46:00.484290   23378 provision.go:84] configureAuth start
	I0802 17:46:00.484300   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.484588   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:00.487348   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.487676   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.487713   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.487867   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.490085   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.490431   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.490458   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.490591   23378 provision.go:143] copyHostCerts
	I0802 17:46:00.490631   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:46:00.490680   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:46:00.490691   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:46:00.490769   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:46:00.490952   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:46:00.490984   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:46:00.490993   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:46:00.491048   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:46:00.491135   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:46:00.491159   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:46:00.491168   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:46:00.491202   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:46:00.491269   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395-m03 san=[127.0.0.1 192.168.39.62 ha-652395-m03 localhost minikube]
	I0802 17:46:00.884913   23378 provision.go:177] copyRemoteCerts
	I0802 17:46:00.884973   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:46:00.884998   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.888105   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.888518   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.888550   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.888766   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.888984   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.889229   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.889398   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:00.972704   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:46:00.972791   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:46:00.995560   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:46:00.995621   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:46:01.017657   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:46:01.017722   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:46:01.040053   23378 provision.go:87] duration metric: took 555.74644ms to configureAuth
	I0802 17:46:01.040086   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:46:01.040357   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:01.040467   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.043361   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.043739   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.043774   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.043894   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.044105   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.044265   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.044411   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.044579   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:01.044759   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:01.044772   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:46:01.311642   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:46:01.311677   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:46:01.311688   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetURL
	I0802 17:46:01.313011   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using libvirt version 6000000
	I0802 17:46:01.315324   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.315713   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.315743   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.315993   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:46:01.316006   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:46:01.316012   23378 client.go:171] duration metric: took 25.267010388s to LocalClient.Create
	I0802 17:46:01.316034   23378 start.go:167] duration metric: took 25.267071211s to libmachine.API.Create "ha-652395"
	I0802 17:46:01.316048   23378 start.go:293] postStartSetup for "ha-652395-m03" (driver="kvm2")
	I0802 17:46:01.316058   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:46:01.316073   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.316307   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:46:01.316344   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.318593   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.318910   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.318935   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.319053   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.319231   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.319431   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.319684   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.401372   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:46:01.405564   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:46:01.405593   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:46:01.405666   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:46:01.405735   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:46:01.405744   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:46:01.405819   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:46:01.416344   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:46:01.439311   23378 start.go:296] duration metric: took 123.247965ms for postStartSetup
	I0802 17:46:01.439392   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:46:01.439971   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:01.442873   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.443331   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.443362   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.443659   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:46:01.443867   23378 start.go:128] duration metric: took 25.413264333s to createHost
	I0802 17:46:01.443890   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.446191   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.446520   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.446552   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.446692   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.446864   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.447045   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.447228   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.447388   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:01.447534   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:01.447544   23378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:46:01.555441   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620761.533849867
	
	I0802 17:46:01.555469   23378 fix.go:216] guest clock: 1722620761.533849867
	I0802 17:46:01.555482   23378 fix.go:229] Guest: 2024-08-02 17:46:01.533849867 +0000 UTC Remote: 2024-08-02 17:46:01.443878214 +0000 UTC m=+153.945590491 (delta=89.971653ms)
	I0802 17:46:01.555506   23378 fix.go:200] guest clock delta is within tolerance: 89.971653ms
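[Annotation] The fix.go lines above compare the guest VM's clock (read over SSH) with the host's and only resync when the difference exceeds a tolerance; here the 89.97ms delta is accepted. A toy version of that comparison using the timestamps from the log; the tolerance constant is chosen for illustration and is not minikube's actual value.

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values from the log: the guest reported this Unix timestamp, while the host
    	// recorded the matching local time just before running the command.
    	guest := time.Unix(1722620761, 533849867)
    	host := time.Date(2024, time.August, 2, 17, 46, 1, 443878214, time.UTC)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}

    	const tolerance = 1 * time.Second // illustrative threshold
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v; would resync\n", delta, tolerance)
    	}
    }
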
	I0802 17:46:01.555514   23378 start.go:83] releasing machines lock for "ha-652395-m03", held for 25.52502111s
	I0802 17:46:01.555542   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.555795   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:01.558412   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.558778   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.558808   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.560907   23378 out.go:177] * Found network options:
	I0802 17:46:01.562135   23378 out.go:177]   - NO_PROXY=192.168.39.210,192.168.39.220
	W0802 17:46:01.563401   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	W0802 17:46:01.563424   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:46:01.563437   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.563984   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.564186   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.564285   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:46:01.564324   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	W0802 17:46:01.564412   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	W0802 17:46:01.564437   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:46:01.564500   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:46:01.564522   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.566998   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567329   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567356   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.567378   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567560   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.567736   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.567819   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.567853   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567899   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.568087   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.568093   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.568261   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.568420   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.568557   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.796482   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:46:01.802346   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:46:01.802418   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:46:01.821079   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:46:01.821100   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:46:01.821156   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:46:01.837276   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:46:01.850195   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:46:01.850246   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:46:01.863020   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:46:01.876817   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:46:01.996317   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:46:02.155795   23378 docker.go:233] disabling docker service ...
	I0802 17:46:02.155854   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:46:02.171577   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:46:02.185476   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:46:02.316663   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:46:02.441608   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:46:02.456599   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:46:02.474518   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:46:02.474602   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.484459   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:46:02.484524   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.493884   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.503576   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.513428   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:46:02.523479   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.532970   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.549805   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
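[Annotation] The sequence of sed commands above rewrites the CRI-O drop-in config to pin the pause image, switch to the cgroupfs cgroup manager, put conmon in the pod cgroup, and allow unprivileged low ports. The sketch below simply prints the shape such a drop-in might take after those edits; the keys come from the logged commands, but the section placement follows CRI-O's standard config layout and is an assumption, as other defaults in the real file are omitted.

    package main

    import "fmt"

    func main() {
    	// Approximate shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above.
    	fmt.Print(`[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `)
    }
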
	I0802 17:46:02.559448   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:46:02.568425   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:46:02.568503   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:46:02.581992   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:46:02.591609   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:02.726113   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:46:02.874460   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:46:02.874528   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:46:02.879933   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:46:02.879998   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:46:02.883528   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:46:02.923272   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:46:02.923376   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:46:02.949589   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:46:02.979299   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:46:02.980662   23378 out.go:177]   - env NO_PROXY=192.168.39.210
	I0802 17:46:02.981881   23378 out.go:177]   - env NO_PROXY=192.168.39.210,192.168.39.220
	I0802 17:46:02.982980   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:02.985700   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:02.986094   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:02.986121   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:02.986355   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:46:02.990125   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:46:03.001417   23378 mustload.go:65] Loading cluster: ha-652395
	I0802 17:46:03.001685   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:03.002055   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:03.002102   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:03.017195   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0802 17:46:03.017622   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:03.018112   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:03.018135   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:03.018412   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:03.018590   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:46:03.020165   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:46:03.020466   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:03.020509   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:03.036320   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0802 17:46:03.036679   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:03.037087   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:03.037105   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:03.037410   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:03.037590   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:46:03.037752   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.62
	I0802 17:46:03.037762   23378 certs.go:194] generating shared ca certs ...
	I0802 17:46:03.037775   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.037885   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:46:03.037921   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:46:03.037929   23378 certs.go:256] generating profile certs ...
	I0802 17:46:03.037991   23378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:46:03.038015   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182
	I0802 17:46:03.038026   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.62 192.168.39.254]
	I0802 17:46:03.165060   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 ...
	I0802 17:46:03.165090   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182: {Name:mkbcf4904b96ff44c4fb2909d0c0c62a3672ca2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.165254   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182 ...
	I0802 17:46:03.165265   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182: {Name:mkd9fd8dcc922620ae47f15cba16ed6aa3bd324c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.165334   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:46:03.165480   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:46:03.165612   23378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:46:03.165629   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:46:03.165642   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:46:03.165659   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:46:03.165678   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:46:03.165697   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:46:03.165715   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:46:03.165733   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:46:03.165751   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:46:03.165819   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:46:03.165858   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:46:03.165865   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:46:03.165887   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:46:03.165909   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:46:03.165931   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:46:03.165967   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:46:03.165996   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.166009   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.166021   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.166054   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:46:03.169127   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:03.169589   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:46:03.169623   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:03.169814   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:46:03.170145   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:46:03.170291   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:46:03.170518   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:46:03.251459   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0802 17:46:03.256083   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0802 17:46:03.267440   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0802 17:46:03.271046   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0802 17:46:03.280731   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0802 17:46:03.284525   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0802 17:46:03.293929   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0802 17:46:03.297873   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0802 17:46:03.307359   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0802 17:46:03.313412   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0802 17:46:03.322935   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0802 17:46:03.326564   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0802 17:46:03.335924   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:46:03.360122   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:46:03.384035   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:46:03.405619   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:46:03.428732   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0802 17:46:03.451179   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:46:03.472724   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:46:03.495092   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:46:03.519137   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:46:03.542671   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:46:03.564900   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:46:03.586591   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0802 17:46:03.602580   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0802 17:46:03.618040   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0802 17:46:03.633043   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0802 17:46:03.648771   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0802 17:46:03.664016   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0802 17:46:03.679357   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0802 17:46:03.695254   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:46:03.700807   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:46:03.710353   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.714382   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.714436   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.720090   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 17:46:03.729674   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:46:03.739249   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.743193   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.743244   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.748385   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:46:03.757952   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:46:03.767280   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.771207   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.771248   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.776380   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
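The hash names under /etc/ssl/certs above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes produced by "openssl x509 -hash -noout", which is why each cert gets a hash-and-symlink pass. A minimal Go sketch of that same step, using illustrative paths rather than minikube's own helper code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert mirrors the commands in the log above: compute the OpenSSL
// subject-name hash of a CA certificate and symlink it into certsDir as
// <hash>.0 so OpenSSL-based clients can find it during verification.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	// Equivalent of "ln -fs": drop any stale link, then recreate it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}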
	I0802 17:46:03.786123   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:46:03.789636   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:46:03.789693   23378 kubeadm.go:934] updating node {m03 192.168.39.62 8443 v1.30.3 crio true true} ...
	I0802 17:46:03.789784   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:46:03.789808   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:46:03.789841   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:46:03.807546   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:46:03.807608   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
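The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 on port 8443 via leader election (cp_enable), with control-plane load-balancing auto-enabled (lb_enable). A rough sketch of rendering a similar manifest with Go's text/template; the struct fields and the trimmed template are illustrative, not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

// Values for the illustrative kube-vip manifest template below.
type kubeVipParams struct {
	VIP       string // virtual IP announced on the control-plane network
	Port      string // API server port the VIP fronts
	Interface string // host interface kube-vip binds to
	Image     string // kube-vip container image
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - {name: vip_interface, value: "{{ .Interface }}"}
    - {name: port, value: "{{ .Port }}"}
    - {name: address, value: "{{ .VIP }}"}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
  hostNetwork: true
`

func main() {
	p := kubeVipParams{
		VIP:       "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
	}
	// Write the rendered manifest to stdout; in minikube the result lands in
	// /etc/kubernetes/manifests so kubelet picks it up as a static pod.
	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}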
	I0802 17:46:03.807703   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:46:03.818185   23378 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0802 17:46:03.818229   23378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0802 17:46:03.829488   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0802 17:46:03.829497   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0802 17:46:03.829512   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:46:03.829516   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:46:03.829536   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0802 17:46:03.829571   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:46:03.829583   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:46:03.829571   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:46:03.844302   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:46:03.844337   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0802 17:46:03.844357   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0802 17:46:03.844379   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0802 17:46:03.844399   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:46:03.844408   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0802 17:46:03.865692   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0802 17:46:03.865736   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
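The kubeadm, kubectl and kubelet binaries are fetched from dl.k8s.io and checked against the published .sha256 files referenced in the checksum= query strings above. A small, self-contained sketch of that download-and-verify step (the /tmp/kubelet target path is just for illustration):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
	got, err := fetch(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	// The published checksum file carries the hex digest (possibly followed
	// by a file name), so compare against its first field.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, _ := io.ReadAll(resp.Body)
	want := strings.Fields(string(raw))[0]
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubelet verified:", got)
}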
	I0802 17:46:04.683374   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0802 17:46:04.692482   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0802 17:46:04.708380   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:46:04.723672   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0802 17:46:04.738690   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:46:04.742181   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:46:04.753005   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:04.870718   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:46:04.887521   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:46:04.887970   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:04.888027   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:04.903924   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0802 17:46:04.904401   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:04.904877   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:04.904897   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:04.905212   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:04.905395   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:46:04.905538   23378 start.go:317] joinCluster: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:46:04.905654   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0802 17:46:04.905667   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:46:04.908305   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:04.908844   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:46:04.908871   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:04.909014   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:46:04.909313   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:46:04.909513   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:46:04.909674   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:46:05.073630   23378 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:46:05.073675   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3vyejt.kbnmanrwnqax2ca9 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m03 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I0802 17:46:28.601554   23378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3vyejt.kbnmanrwnqax2ca9 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m03 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (23.527855462s)
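The join command above pins the cluster CA with --discovery-token-ca-cert-hash, which is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short sketch of recomputing that value from ca.crt (the path below is where the log shows minikube placing the cluster CA on a node; adjust as needed):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; the scp step earlier in the log copies the cluster
	// CA to /var/lib/minikube/certs/ca.crt on each node.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

The printed value should match the sha256:c8e17f8e... argument passed to kubeadm join above.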
	I0802 17:46:28.601590   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0802 17:46:29.216420   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395-m03 minikube.k8s.io/updated_at=2024_08_02T17_46_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=false
	I0802 17:46:29.336594   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-652395-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0802 17:46:29.452215   23378 start.go:319] duration metric: took 24.546671487s to joinCluster
	I0802 17:46:29.452292   23378 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:46:29.452629   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:29.453605   23378 out.go:177] * Verifying Kubernetes components...
	I0802 17:46:29.454946   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:29.703779   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:46:29.760927   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:46:29.761307   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0802 17:46:29.761394   23378 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.210:8443
	I0802 17:46:29.761649   23378 node_ready.go:35] waiting up to 6m0s for node "ha-652395-m03" to be "Ready" ...
	I0802 17:46:29.761745   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:29.761755   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:29.761767   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:29.761776   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:29.770376   23378 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0802 17:46:30.261866   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:30.261893   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:30.261904   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:30.261911   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:30.265682   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:30.762410   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:30.762435   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:30.762444   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:30.762451   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:30.765614   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:31.262713   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:31.262736   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:31.262744   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:31.262754   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:31.267327   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:31.761938   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:31.761968   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:31.761982   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:31.761987   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:31.764863   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:31.765340   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:32.262817   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:32.262840   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:32.262851   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:32.262857   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:32.266230   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:32.762068   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:32.762087   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:32.762095   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:32.762098   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:32.765625   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:33.261868   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:33.261888   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:33.261897   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:33.261902   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:33.265036   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:33.762209   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:33.762229   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:33.762236   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:33.762239   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:33.766920   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:33.767820   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:34.262870   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:34.262889   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:34.262897   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:34.262900   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:34.266363   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:34.762163   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:34.762185   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:34.762193   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:34.762197   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:34.765390   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:35.262210   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:35.262233   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:35.262244   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:35.262251   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:35.265436   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:35.761831   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:35.761850   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:35.761859   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:35.761865   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:35.771561   23378 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0802 17:46:35.772661   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:36.261968   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:36.261991   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:36.262002   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:36.262007   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:36.265294   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:36.762238   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:36.762263   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:36.762278   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:36.762284   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:36.765814   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:37.262760   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:37.262786   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:37.262796   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:37.262801   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:37.266752   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:37.762694   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:37.762716   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:37.762726   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:37.762733   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:37.765685   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:38.261881   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:38.261911   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:38.261922   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:38.261927   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:38.266922   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:38.267761   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:38.762572   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:38.762601   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:38.762611   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:38.762616   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:38.765699   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:39.262553   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:39.262576   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:39.262585   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:39.262589   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:39.265635   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:39.762404   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:39.762428   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:39.762439   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:39.762445   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:39.766257   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:40.261822   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:40.261844   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:40.261851   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:40.261856   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:40.265926   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:40.762356   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:40.762374   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:40.762384   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:40.762388   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:40.766293   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:40.766891   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:41.262203   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:41.262226   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:41.262237   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:41.262242   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:41.266069   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:41.761929   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:41.761957   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:41.761968   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:41.761974   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:41.765904   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.261878   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:42.261902   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:42.261910   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:42.261913   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:42.265102   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.762829   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:42.762853   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:42.762865   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:42.762869   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:42.766809   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.767367   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:43.262326   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:43.262347   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:43.262355   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:43.262359   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:43.266042   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:43.762046   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:43.762067   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:43.762075   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:43.762079   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:43.765536   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:44.262774   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:44.262798   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:44.262807   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:44.262812   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:44.266011   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:44.761948   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:44.761972   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:44.761983   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:44.761997   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:44.765716   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:45.262435   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:45.262454   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:45.262463   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:45.262466   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:45.265931   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:45.266633   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:45.762462   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:45.762479   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:45.762488   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:45.762493   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:45.765789   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:46.262561   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:46.262580   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:46.262588   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:46.262593   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:46.265482   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:46.762168   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:46.762191   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:46.762198   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:46.762203   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:46.765612   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.262815   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:47.262836   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.262843   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.262848   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.265976   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.761953   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:47.761981   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.761995   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.762000   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.765436   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.766105   23378 node_ready.go:49] node "ha-652395-m03" has status "Ready":"True"
	I0802 17:46:47.766127   23378 node_ready.go:38] duration metric: took 18.004460114s for node "ha-652395-m03" to be "Ready" ...
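The node_ready wait above is a simple poll: GET the node object roughly every 500ms until its Ready condition reports True. With client-go, an equivalent wait looks roughly like this sketch (kubeconfig path taken from the log; the helper name is illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or timeout.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-5397/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-652395-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-652395-m03 is Ready")
}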
	I0802 17:46:47.766136   23378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:46:47.766214   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:47.766226   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.766235   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.766243   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.774008   23378 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0802 17:46:47.781343   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.781426   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7bnn4
	I0802 17:46:47.781431   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.781439   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.781443   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.785589   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:47.786687   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.786707   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.786717   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.786723   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.789953   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.790718   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.790733   23378 pod_ready.go:81] duration metric: took 9.363791ms for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.790742   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.790800   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzmsx
	I0802 17:46:47.790811   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.790817   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.790824   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.793539   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.794362   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.794375   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.794382   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.794386   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.796542   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.797035   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.797053   23378 pod_ready.go:81] duration metric: took 6.304591ms for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.797061   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.797109   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395
	I0802 17:46:47.797117   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.797123   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.797126   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.799477   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.800384   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.800398   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.800405   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.800409   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.805504   23378 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0802 17:46:47.806133   23378 pod_ready.go:92] pod "etcd-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.806153   23378 pod_ready.go:81] duration metric: took 9.084753ms for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.806164   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.806225   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m02
	I0802 17:46:47.806236   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.806246   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.806257   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.809373   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.809899   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:47.809913   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.809920   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.809925   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.812199   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.812585   23378 pod_ready.go:92] pod "etcd-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.812600   23378 pod_ready.go:81] duration metric: took 6.429757ms for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.812608   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.962984   23378 request.go:629] Waited for 150.32177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m03
	I0802 17:46:47.963058   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m03
	I0802 17:46:47.963066   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.963074   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.963079   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.966757   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.162525   23378 request.go:629] Waited for 194.948292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:48.162578   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:48.162583   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.162590   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.162594   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.166781   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:48.167780   23378 pod_ready.go:92] pod "etcd-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.167804   23378 pod_ready.go:81] duration metric: took 355.188036ms for pod "etcd-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
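The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to conservative defaults (on the order of 5 requests/s with a small burst), so back-to-back GETs get delayed locally rather than by the API server. A hedged sketch of raising those limits when building a client, if a tool needs to issue many sequential requests:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-5397/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst at 0, client-go uses its defaults and inserts the
	// client-side waits seen in the log above. Raising them trades extra
	// API-server load for fewer artificial delays; values are illustrative.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", cs != nil)
}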
	I0802 17:46:48.167827   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.362846   23378 request.go:629] Waited for 194.928144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:46:48.362907   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:46:48.362912   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.362920   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.362927   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.366366   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.562612   23378 request.go:629] Waited for 195.371826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:48.562666   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:48.562671   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.562679   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.562685   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.565549   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:48.566597   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.566617   23378 pod_ready.go:81] duration metric: took 398.78187ms for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.566626   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.762745   23378 request.go:629] Waited for 195.99138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:46:48.762810   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:46:48.762817   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.762827   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.762835   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.766560   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.962702   23378 request.go:629] Waited for 195.42677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:48.962762   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:48.962767   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.962775   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.962779   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.966389   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.966908   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.966926   23378 pod_ready.go:81] duration metric: took 400.293446ms for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.966935   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.161948   23378 request.go:629] Waited for 194.945915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m03
	I0802 17:46:49.162042   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m03
	I0802 17:46:49.162052   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.162061   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.162068   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.165467   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.362582   23378 request.go:629] Waited for 196.423946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:49.362663   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:49.362668   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.362676   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.362684   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.366680   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.367279   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:49.367302   23378 pod_ready.go:81] duration metric: took 400.357196ms for pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.367315   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.562217   23378 request.go:629] Waited for 194.831384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:46:49.562284   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:46:49.562289   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.562297   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.562301   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.565924   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.762426   23378 request.go:629] Waited for 195.094293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:49.762490   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:49.762495   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.762502   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.762505   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.769266   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:49.769865   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:49.769884   23378 pod_ready.go:81] duration metric: took 402.557554ms for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.769898   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.962495   23378 request.go:629] Waited for 192.522293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:46:49.962561   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:46:49.962569   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.962579   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.962584   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.966077   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.162234   23378 request.go:629] Waited for 195.342234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:50.162307   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:50.162314   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.162323   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.162330   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.165518   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.166128   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.166146   23378 pod_ready.go:81] duration metric: took 396.240391ms for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.166159   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.362704   23378 request.go:629] Waited for 196.446774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m03
	I0802 17:46:50.362782   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m03
	I0802 17:46:50.362791   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.362807   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.362816   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.366509   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.562760   23378 request.go:629] Waited for 195.399695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.562816   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.562821   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.562829   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.562834   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.566397   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.566910   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.566932   23378 pod_ready.go:81] duration metric: took 400.763468ms for pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.566944   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgghw" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.762495   23378 request.go:629] Waited for 195.482433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgghw
	I0802 17:46:50.762598   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgghw
	I0802 17:46:50.762610   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.762621   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.762630   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.766123   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.962069   23378 request.go:629] Waited for 195.088254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.962144   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.962153   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.962162   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.962170   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.965779   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.966239   23378 pod_ready.go:92] pod "kube-proxy-fgghw" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.966258   23378 pod_ready.go:81] duration metric: took 399.306891ms for pod "kube-proxy-fgghw" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.966268   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.162760   23378 request.go:629] Waited for 196.427311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:46:51.162850   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:46:51.162861   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.162873   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.162884   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.166652   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.362632   23378 request.go:629] Waited for 195.360523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:51.362692   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:51.362699   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.362710   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.362716   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.365680   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:51.366225   23378 pod_ready.go:92] pod "kube-proxy-l7npk" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:51.366246   23378 pod_ready.go:81] duration metric: took 399.971201ms for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.366258   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.562327   23378 request.go:629] Waited for 195.965492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:46:51.562388   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:46:51.562394   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.562402   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.562408   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.565803   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.763012   23378 request.go:629] Waited for 196.414319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:51.763086   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:51.763094   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.763124   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.763146   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.766283   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.767151   23378 pod_ready.go:92] pod "kube-proxy-rtbb6" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:51.767170   23378 pod_ready.go:81] duration metric: took 400.904121ms for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.767181   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.962170   23378 request.go:629] Waited for 194.91655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:46:51.962246   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:46:51.962251   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.962260   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.962270   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.965454   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.162468   23378 request.go:629] Waited for 196.404825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:52.162522   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:52.162526   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.162533   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.162538   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.165929   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.166675   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.166701   23378 pod_ready.go:81] duration metric: took 399.510556ms for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.166715   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.362724   23378 request.go:629] Waited for 195.93744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:46:52.362806   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:46:52.362814   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.362823   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.362831   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.366089   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.561990   23378 request.go:629] Waited for 195.080467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:52.562062   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:52.562088   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.562098   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.562106   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.565363   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.565956   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.565974   23378 pod_ready.go:81] duration metric: took 399.25227ms for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.565986   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.762409   23378 request.go:629] Waited for 196.357205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m03
	I0802 17:46:52.762492   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m03
	I0802 17:46:52.762500   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.762510   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.762519   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.766379   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.962239   23378 request.go:629] Waited for 195.337218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:52.962309   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:52.962314   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.962321   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.962325   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.966257   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.967021   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.967048   23378 pod_ready.go:81] duration metric: took 401.05345ms for pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.967062   23378 pod_ready.go:38] duration metric: took 5.200911248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:46:52.967083   23378 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:46:52.967160   23378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:46:52.983092   23378 api_server.go:72] duration metric: took 23.53076578s to wait for apiserver process to appear ...
	I0802 17:46:52.983133   23378 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:46:52.983158   23378 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0802 17:46:52.988942   23378 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0802 17:46:52.989100   23378 round_trippers.go:463] GET https://192.168.39.210:8443/version
	I0802 17:46:52.989130   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.989143   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.989150   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.990057   23378 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0802 17:46:52.990134   23378 api_server.go:141] control plane version: v1.30.3
	I0802 17:46:52.990165   23378 api_server.go:131] duration metric: took 7.024465ms to wait for apiserver health ...
	I0802 17:46:52.990175   23378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:46:53.162370   23378 request.go:629] Waited for 172.120514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.162440   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.162447   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.162457   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.162470   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.168986   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:53.175225   23378 system_pods.go:59] 24 kube-system pods found
	I0802 17:46:53.175252   23378 system_pods.go:61] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:46:53.175256   23378 system_pods.go:61] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:46:53.175260   23378 system_pods.go:61] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:46:53.175265   23378 system_pods.go:61] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:46:53.175269   23378 system_pods.go:61] "etcd-ha-652395-m03" [55847ea3-fcfb-45c1-84ed-1c59f0103a8e] Running
	I0802 17:46:53.175272   23378 system_pods.go:61] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:46:53.175274   23378 system_pods.go:61] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:46:53.175279   23378 system_pods.go:61] "kindnet-qw2hm" [a2caca18-72b5-4bf1-8e8f-da4f91ff543e] Running
	I0802 17:46:53.175284   23378 system_pods.go:61] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:46:53.175289   23378 system_pods.go:61] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:46:53.175293   23378 system_pods.go:61] "kube-apiserver-ha-652395-m03" [168a8066-6efe-459d-ae4e-7127c490a688] Running
	I0802 17:46:53.175298   23378 system_pods.go:61] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:46:53.175306   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:46:53.175311   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m03" [40ecf9df-0961-4ade-8f00-ba8915370106] Running
	I0802 17:46:53.175319   23378 system_pods.go:61] "kube-proxy-fgghw" [8a72fb78-19f9-499b-943b-fd95b0da2994] Running
	I0802 17:46:53.175324   23378 system_pods.go:61] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:46:53.175331   23378 system_pods.go:61] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:46:53.175336   23378 system_pods.go:61] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:46:53.175342   23378 system_pods.go:61] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:46:53.175345   23378 system_pods.go:61] "kube-scheduler-ha-652395-m03" [bb4d3dc8-ddcc-487a-bc81-4ee5d6c33a54] Running
	I0802 17:46:53.175349   23378 system_pods.go:61] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:46:53.175353   23378 system_pods.go:61] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:46:53.175358   23378 system_pods.go:61] "kube-vip-ha-652395-m03" [b041dfe9-0d53-429d-9b41-4e80d032c691] Running
	I0802 17:46:53.175363   23378 system_pods.go:61] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:46:53.175371   23378 system_pods.go:74] duration metric: took 185.190304ms to wait for pod list to return data ...
	I0802 17:46:53.175379   23378 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:46:53.362278   23378 request.go:629] Waited for 186.808969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:46:53.362334   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:46:53.362338   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.362345   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.362350   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.365707   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:53.365846   23378 default_sa.go:45] found service account: "default"
	I0802 17:46:53.365865   23378 default_sa.go:55] duration metric: took 190.475476ms for default service account to be created ...
	I0802 17:46:53.365874   23378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:46:53.562188   23378 request.go:629] Waited for 196.237037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.562289   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.562300   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.562324   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.562336   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.568799   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:53.575257   23378 system_pods.go:86] 24 kube-system pods found
	I0802 17:46:53.575282   23378 system_pods.go:89] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:46:53.575288   23378 system_pods.go:89] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:46:53.575293   23378 system_pods.go:89] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:46:53.575297   23378 system_pods.go:89] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:46:53.575301   23378 system_pods.go:89] "etcd-ha-652395-m03" [55847ea3-fcfb-45c1-84ed-1c59f0103a8e] Running
	I0802 17:46:53.575305   23378 system_pods.go:89] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:46:53.575308   23378 system_pods.go:89] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:46:53.575312   23378 system_pods.go:89] "kindnet-qw2hm" [a2caca18-72b5-4bf1-8e8f-da4f91ff543e] Running
	I0802 17:46:53.575320   23378 system_pods.go:89] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:46:53.575325   23378 system_pods.go:89] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:46:53.575331   23378 system_pods.go:89] "kube-apiserver-ha-652395-m03" [168a8066-6efe-459d-ae4e-7127c490a688] Running
	I0802 17:46:53.575336   23378 system_pods.go:89] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:46:53.575343   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:46:53.575347   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m03" [40ecf9df-0961-4ade-8f00-ba8915370106] Running
	I0802 17:46:53.575354   23378 system_pods.go:89] "kube-proxy-fgghw" [8a72fb78-19f9-499b-943b-fd95b0da2994] Running
	I0802 17:46:53.575358   23378 system_pods.go:89] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:46:53.575364   23378 system_pods.go:89] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:46:53.575368   23378 system_pods.go:89] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:46:53.575375   23378 system_pods.go:89] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:46:53.575379   23378 system_pods.go:89] "kube-scheduler-ha-652395-m03" [bb4d3dc8-ddcc-487a-bc81-4ee5d6c33a54] Running
	I0802 17:46:53.575385   23378 system_pods.go:89] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:46:53.575389   23378 system_pods.go:89] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:46:53.575394   23378 system_pods.go:89] "kube-vip-ha-652395-m03" [b041dfe9-0d53-429d-9b41-4e80d032c691] Running
	I0802 17:46:53.575398   23378 system_pods.go:89] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:46:53.575405   23378 system_pods.go:126] duration metric: took 209.523014ms to wait for k8s-apps to be running ...
	I0802 17:46:53.575412   23378 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:46:53.575457   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:46:53.590689   23378 system_svc.go:56] duration metric: took 15.269351ms WaitForService to wait for kubelet
	I0802 17:46:53.590714   23378 kubeadm.go:582] duration metric: took 24.138389815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:46:53.590734   23378 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:46:53.762036   23378 request.go:629] Waited for 171.237519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes
	I0802 17:46:53.762120   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes
	I0802 17:46:53.762127   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.762137   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.762146   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.765799   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:53.766763   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766783   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766794   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766797   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766801   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766804   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766808   23378 node_conditions.go:105] duration metric: took 176.069555ms to run NodePressure ...
	I0802 17:46:53.766819   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:46:53.766843   23378 start.go:255] writing updated cluster config ...
	I0802 17:46:53.767126   23378 ssh_runner.go:195] Run: rm -f paused
	I0802 17:46:53.817853   23378 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 17:46:53.819753   23378 out.go:177] * Done! kubectl is now configured to use "ha-652395" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.331093095Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-wwdvm,Uid:8d2d25e8-37d0-45c4-9b5a-9722d329d86f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620815018916407,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:46:54.703250455Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:149760da-f585-48bf-9cc8-63ff848cf3c8,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1722620672975939117,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-02T17:44:32.654012008Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7bnn4,Uid:b4eedd91-fcf6-4cef-81b0-d043c38cc00c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620672974773664,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:44:32.656874487Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-gzmsx,Uid:f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722620672957972037,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:44:32.647078297Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&PodSandboxMetadata{Name:kube-proxy-l7npk,Uid:8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620657535357923,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-02T17:44:17.198773664Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&PodSandboxMetadata{Name:kindnet-bjrkb,Uid:04d82e24-8aa1-4c71-b904-03b53de10142,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620657496620240,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T17:44:17.175592902Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-652395,Uid:b35503df9ee27b31247351a3b8b83f9c,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620638464582724,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b35503df9ee27b31247351a3b8b83f9c,kubernetes.io/config.seen: 2024-08-02T17:43:57.959657502Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-652395,Uid:d3c9c044aaa51f57cf98fff08c0c405f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620638451903554,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405
f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d3c9c044aaa51f57cf98fff08c0c405f,kubernetes.io/config.seen: 2024-08-02T17:43:57.959658484Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-652395,Uid:e8445990b47d8cfa9cb5c64d20f86596,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620638437298149,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.210:8443,kubernetes.io/config.hash: e8445990b47d8cfa9cb5c64d20f86596,kubernetes.io/config.seen: 2024-08-02T17:43:57.959656407Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},&PodSandbox{Id:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-652395,Uid:90fe20ba3a1314e53eb4a1b834adcbbf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620638425908091,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{kubernetes.io/config.hash: 90fe20ba3a1314e53eb4a1b834adcbbf,kubernetes.io/config.seen: 2024-08-02T17:43:57.959659160Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&PodSandboxMetadata{Name:etcd-ha-652395,Uid:fe06cf29caa5fbee7270b029a9ae89d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722620638415144640,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-652395,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.210:2379,kubernetes.io/config.hash: fe06cf29caa5fbee7270b029a9ae89d7,kubernetes.io/config.seen: 2024-08-02T17:43:57.959652030Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ab789247-3363-40a6-9e59-99410c85e414 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.331725529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d065e17f-935b-462f-84b7-0a7e7d151534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.331789534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d065e17f-935b-462f-84b7-0a7e7d151534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.332006568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d065e17f-935b-462f-84b7-0a7e7d151534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.362775559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ba5bd3b-cab2-498e-98cd-cb82fe3a0c1f name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.362988852Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ba5bd3b-cab2-498e-98cd-cb82fe3a0c1f name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.364184248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56bc844f-4b35-4ec7-ad92-caeffe88326e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.364972147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621030364941939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56bc844f-4b35-4ec7-ad92-caeffe88326e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.365630707Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00e94cc2-92ea-4fe0-88ec-7bd267c7c7e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.365695501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00e94cc2-92ea-4fe0-88ec-7bd267c7c7e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.366111844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00e94cc2-92ea-4fe0-88ec-7bd267c7c7e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.403380515Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4195510-3c00-478a-a1b8-51bef1fa9287 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.403491878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4195510-3c00-478a-a1b8-51bef1fa9287 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.405333554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=579d76ba-5865-49a8-8806-1a5dfb712a79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.407787623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621030406286069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=579d76ba-5865-49a8-8806-1a5dfb712a79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.410289540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b4b4a43-c913-471f-ad8f-3af295ab70f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.410396699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b4b4a43-c913-471f-ad8f-3af295ab70f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.410923990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b4b4a43-c913-471f-ad8f-3af295ab70f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.451385693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2285423c-b23f-40ba-8cce-bf1c487aa988 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.451511312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2285423c-b23f-40ba-8cce-bf1c487aa988 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.452792754Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f74413b5-78f2-46ab-b1b5-1cb833c526e6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.453224768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621030453202060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f74413b5-78f2-46ab-b1b5-1cb833c526e6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.453774652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=976b398e-2d71-4f82-a2da-4ad910b9eae5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.453864518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=976b398e-2d71-4f82-a2da-4ad910b9eae5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:50:30 ha-652395 crio[673]: time="2024-08-02 17:50:30.454085143Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=976b398e-2d71-4f82-a2da-4ad910b9eae5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8fd869ff4b02d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e8db151d94a97       busybox-fc5497c4f-wwdvm
	0b353e683c45c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   3c8b3d0b4534f       storage-provisioner
	c360a48ed21dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   236df4e4d374d       coredns-7db6d8ff4d-7bnn4
	122af758e0175       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   7a85af5981798       coredns-7db6d8ff4d-gzmsx
	e5737b2ef0345       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   93bf8df122de4       kindnet-bjrkb
	dbaf687f1fee9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   aa85cd011b109       kube-proxy-l7npk
	6144aba25daef       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   f70dac73be7d9       kube-vip-ha-652395
	158d622aed9a7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   b08c45a675b53       kube-apiserver-ha-652395
	a3c95a2e3488e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   2f03523628a5e       kube-controller-manager-ha-652395
	c587c6ce09941       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   d14257a1927ee       kube-scheduler-ha-652395
	fae5bea03ccdc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   540d9595b8d86       etcd-ha-652395
	
	
	==> coredns [122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1] <==
	[INFO] 10.244.2.2:60449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154708s
	[INFO] 10.244.2.2:59061 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076226s
	[INFO] 10.244.2.2:55056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153757s
	[INFO] 10.244.2.2:54378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059203s
	[INFO] 10.244.0.4:54290 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133679s
	[INFO] 10.244.0.4:45555 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706989s
	[INFO] 10.244.0.4:53404 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111941s
	[INFO] 10.244.0.4:37483 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045512s
	[INFO] 10.244.1.2:49967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130599s
	[INFO] 10.244.1.2:57007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090105s
	[INFO] 10.244.1.2:43820 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110127s
	[INFO] 10.244.2.2:36224 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096715s
	[INFO] 10.244.2.2:60973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081675s
	[INFO] 10.244.0.4:40476 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189503s
	[INFO] 10.244.0.4:56165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046005s
	[INFO] 10.244.0.4:44437 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034828s
	[INFO] 10.244.0.4:35238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032154s
	[INFO] 10.244.1.2:56315 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166841s
	[INFO] 10.244.1.2:47239 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198329s
	[INFO] 10.244.1.2:57096 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123709s
	[INFO] 10.244.2.2:46134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000490913s
	[INFO] 10.244.2.2:53250 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148459s
	[INFO] 10.244.0.4:56093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118253s
	[INFO] 10.244.0.4:34180 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008075s
	[INFO] 10.244.0.4:45410 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005242s
	
	
	==> coredns [c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e] <==
	[INFO] 10.244.1.2:54559 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003157766s
	[INFO] 10.244.1.2:59747 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001806815s
	[INFO] 10.244.2.2:50295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166931s
	[INFO] 10.244.2.2:41315 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000159243s
	[INFO] 10.244.2.2:36008 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010659s
	[INFO] 10.244.2.2:60572 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001796945s
	[INFO] 10.244.0.4:60264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128512s
	[INFO] 10.244.0.4:53377 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000106189s
	[INFO] 10.244.0.4:40974 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000119601s
	[INFO] 10.244.1.2:34952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129505s
	[INFO] 10.244.1.2:58425 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00370685s
	[INFO] 10.244.1.2:57393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172166s
	[INFO] 10.244.2.2:37875 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001360319s
	[INFO] 10.244.2.2:40319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119258s
	[INFO] 10.244.0.4:41301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086881s
	[INFO] 10.244.0.4:48861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00176135s
	[INFO] 10.244.0.4:55078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129582s
	[INFO] 10.244.0.4:37426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138717s
	[INFO] 10.244.1.2:36979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118362s
	[INFO] 10.244.2.2:57363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012985s
	[INFO] 10.244.2.2:39508 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130428s
	[INFO] 10.244.1.2:35447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118938s
	[INFO] 10.244.2.2:32993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168916s
	[INFO] 10.244.2.2:41103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214849s
	[INFO] 10.244.0.4:36090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133411s
	
	
	==> describe nodes <==
	Name:               ha-652395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:50:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-652395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ba599bf07ef4e41ba86086b6ac2ff1a
	  System UUID:                5ba599bf-07ef-4e41-ba86-086b6ac2ff1a
	  Boot ID:                    ed33b037-d8f7-4cbf-a057-27f14a3cc7dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwdvm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-7db6d8ff4d-7bnn4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7db6d8ff4d-gzmsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-652395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-bjrkb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-652395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-652395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-proxy-l7npk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-652395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-652395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m32s (x7 over 6m32s)  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m32s (x8 over 6m32s)  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s (x8 over 6m32s)  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m26s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m26s                  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s                  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s                  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal  NodeReady                5m58s                  kubelet          Node ha-652395 status is now: NodeReady
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	
	
	Name:               ha-652395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:45:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:48:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-652395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4562f021ca54cf29302ae6053b176ca
	  System UUID:                b4562f02-1ca5-4cf2-9302-ae6053b176ca
	  Boot ID:                    e7c511aa-dc1e-4298-ac46-9d614ab780c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4gkm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-652395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-7n2wh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-652395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-652395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-rtbb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-652395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-vip-ha-652395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           3m46s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-652395-m02 status is now: NodeNotReady
	
	
	Name:               ha-652395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_46_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:46:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:50:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-652395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98b40f3acdab4627b19b6017ea4f9a53
	  System UUID:                98b40f3a-cdab-4627-b19b-6017ea4f9a53
	  Boot ID:                    5e9d8bb1-9650-48d7-bddb-5da6b47ffd9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lwm5m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-652395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-qw2hm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-652395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-652395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-fgghw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-652395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-vip-ha-652395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-652395-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal  RegisteredNode           3m46s                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	
	
	Name:               ha-652395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_47_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:50:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-652395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 998c02abf56b4784b82e5c48780cf7d3
	  System UUID:                998c02ab-f56b-4784-b82e-5c48780cf7d3
	  Boot ID:                    775309da-8648-4a2a-9433-2f07263d9659
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nksdg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-d44zn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  NodeReady                2m40s              kubelet          Node ha-652395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 2 17:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051087] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037656] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.691479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.739202] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.520223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.851587] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054661] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055410] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.166920] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.132294] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.235363] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.898825] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.781164] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 2 17:44] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.095134] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.851149] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.579996] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 2 17:45] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1] <==
	{"level":"warn","ts":"2024-08-02T17:50:30.741247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.750763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.758068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.761347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.764492Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.771299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.772301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.778027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.785052Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.788379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.792028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.799937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.806524Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.812545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.815667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.818655Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.833678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.844802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.850486Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6e90f565a3251e9","rtt":"9.910821ms","error":"dial tcp 192.168.39.220:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-02T17:50:30.850574Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6e90f565a3251e9","rtt":"956.439µs","error":"dial tcp 192.168.39.220:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-02T17:50:30.853555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.86878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.870518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.879759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:50:30.898494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:50:30 up 6 min,  0 users,  load average: 0.19, 0.25, 0.14
	Linux ha-652395 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1] <==
	I0802 17:49:52.520124       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:50:02.523770       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:50:02.523830       1 main.go:299] handling current node
	I0802 17:50:02.523850       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:50:02.523857       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:50:02.524053       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:50:02.524073       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:50:02.524129       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:50:02.524147       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:50:12.528475       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:50:12.528540       1 main.go:299] handling current node
	I0802 17:50:12.528566       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:50:12.528575       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:50:12.528738       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:50:12.528766       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:50:12.528909       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:50:12.528938       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:50:22.520049       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:50:22.520154       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:50:22.520353       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:50:22.520374       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:50:22.520517       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:50:22.520535       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:50:22.520604       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:50:22.520620       1 main.go:299] handling current node
	
	
	==> kube-apiserver [158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439] <==
	I0802 17:44:04.923350       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0802 17:44:04.938631       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 17:44:17.141121       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0802 17:44:17.219269       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0802 17:46:27.269704       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0802 17:46:27.270497       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0802 17:46:27.270506       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 218.432µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0802 17:46:27.272378       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0802 17:46:27.272560       1 timeout.go:142] post-timeout activity - time-elapsed: 3.01038ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0802 17:46:59.020907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35408: use of closed network connection
	E0802 17:46:59.231600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35434: use of closed network connection
	E0802 17:46:59.424159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35456: use of closed network connection
	E0802 17:46:59.607761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35472: use of closed network connection
	E0802 17:46:59.786874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35486: use of closed network connection
	E0802 17:46:59.972651       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35508: use of closed network connection
	E0802 17:47:00.154198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35528: use of closed network connection
	E0802 17:47:00.320229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35544: use of closed network connection
	E0802 17:47:00.496603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35558: use of closed network connection
	E0802 17:47:00.784699       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35582: use of closed network connection
	E0802 17:47:00.956287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35598: use of closed network connection
	E0802 17:47:01.157122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35612: use of closed network connection
	E0802 17:47:01.325967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35626: use of closed network connection
	E0802 17:47:01.498415       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35656: use of closed network connection
	E0802 17:47:01.686970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35662: use of closed network connection
	W0802 17:48:22.810965       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.210 192.168.39.62]
	
	
	==> kube-controller-manager [a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6] <==
	I0802 17:46:26.434062       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-652395-m03" podCIDRs=["10.244.2.0/24"]
	I0802 17:46:31.448149       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m03"
	I0802 17:46:54.718102       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.249834ms"
	I0802 17:46:54.746796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.546663ms"
	I0802 17:46:54.977547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.684094ms"
	I0802 17:46:55.051103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.251999ms"
	I0802 17:46:55.083704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.495025ms"
	I0802 17:46:55.083824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.339µs"
	I0802 17:46:55.213621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.597634ms"
	I0802 17:46:55.213905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.702µs"
	I0802 17:46:58.071723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.562837ms"
	I0802 17:46:58.071875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.601µs"
	I0802 17:46:58.583792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.502132ms"
	E0802 17:46:58.583906       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0802 17:46:58.584055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.111µs"
	I0802 17:46:58.589593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.816µs"
	E0802 17:47:29.961797       1 certificate_controller.go:146] Sync csr-lzxzq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lzxzq": the object has been modified; please apply your changes to the latest version and try again
	E0802 17:47:29.978174       1 certificate_controller.go:146] Sync csr-lzxzq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lzxzq": the object has been modified; please apply your changes to the latest version and try again
	I0802 17:47:30.240126       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-652395-m04\" does not exist"
	I0802 17:47:30.269420       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-652395-m04" podCIDRs=["10.244.3.0/24"]
	I0802 17:47:31.461708       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m04"
	I0802 17:47:50.185291       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	I0802 17:48:44.488719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	I0802 17:48:44.647582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.085066ms"
	I0802 17:48:44.647843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.457µs"
	
	
	==> kube-proxy [dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d] <==
	I0802 17:44:18.175971       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:44:18.192513       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.210"]
	I0802 17:44:18.232306       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:44:18.232344       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:44:18.232359       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:44:18.235019       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:44:18.235619       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:44:18.235694       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:44:18.237419       1 config.go:192] "Starting service config controller"
	I0802 17:44:18.237875       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:44:18.237930       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:44:18.237978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:44:18.239116       1 config.go:319] "Starting node config controller"
	I0802 17:44:18.239152       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:44:18.338607       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0802 17:44:18.338702       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:44:18.339243       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca] <==
	E0802 17:46:26.483599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fgghw\": pod kube-proxy-fgghw is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fgghw" node="ha-652395-m03"
	E0802 17:46:26.484154       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8a72fb78-19f9-499b-943b-fd95b0da2994(kube-system/kube-proxy-fgghw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fgghw"
	E0802 17:46:26.484295       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fgghw\": pod kube-proxy-fgghw is already assigned to node \"ha-652395-m03\"" pod="kube-system/kube-proxy-fgghw"
	I0802 17:46:26.484397       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fgghw" node="ha-652395-m03"
	E0802 17:46:26.488889       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qw2hm\": pod kindnet-qw2hm is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qw2hm" node="ha-652395-m03"
	E0802 17:46:26.488934       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a2caca18-72b5-4bf1-8e8f-da4f91ff543e(kube-system/kindnet-qw2hm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qw2hm"
	E0802 17:46:26.488952       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qw2hm\": pod kindnet-qw2hm is already assigned to node \"ha-652395-m03\"" pod="kube-system/kindnet-qw2hm"
	I0802 17:46:26.488968       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qw2hm" node="ha-652395-m03"
	I0802 17:46:54.677737       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="dfefc05b-4ed9-4de9-b511-848735a02832" pod="default/busybox-fc5497c4f-4gkm6" assumedNode="ha-652395-m02" currentNode="ha-652395-m03"
	E0802 17:46:54.681524       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4gkm6\": pod busybox-fc5497c4f-4gkm6 is already assigned to node \"ha-652395-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-4gkm6" node="ha-652395-m03"
	E0802 17:46:54.681606       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dfefc05b-4ed9-4de9-b511-848735a02832(default/busybox-fc5497c4f-4gkm6) was assumed on ha-652395-m03 but assigned to ha-652395-m02" pod="default/busybox-fc5497c4f-4gkm6"
	E0802 17:46:54.681629       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4gkm6\": pod busybox-fc5497c4f-4gkm6 is already assigned to node \"ha-652395-m02\"" pod="default/busybox-fc5497c4f-4gkm6"
	I0802 17:46:54.681665       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-4gkm6" node="ha-652395-m02"
	E0802 17:46:54.718965       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwdvm\": pod busybox-fc5497c4f-wwdvm is already assigned to node \"ha-652395\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wwdvm" node="ha-652395"
	E0802 17:46:54.719105       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8d2d25e8-37d0-45c4-9b5a-9722d329d86f(default/busybox-fc5497c4f-wwdvm) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wwdvm"
	E0802 17:46:54.719159       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwdvm\": pod busybox-fc5497c4f-wwdvm is already assigned to node \"ha-652395\"" pod="default/busybox-fc5497c4f-wwdvm"
	I0802 17:46:54.719234       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wwdvm" node="ha-652395"
	E0802 17:46:54.719601       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lwm5m\": pod busybox-fc5497c4f-lwm5m is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lwm5m" node="ha-652395-m03"
	E0802 17:46:54.719665       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6389e9d8-4530-492e-8bc6-7bc9a6516f41(default/busybox-fc5497c4f-lwm5m) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lwm5m"
	E0802 17:46:54.719697       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lwm5m\": pod busybox-fc5497c4f-lwm5m is already assigned to node \"ha-652395-m03\"" pod="default/busybox-fc5497c4f-lwm5m"
	I0802 17:46:54.719759       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lwm5m" node="ha-652395-m03"
	E0802 17:47:30.336011       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d44zn\": pod kube-proxy-d44zn is already assigned to node \"ha-652395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d44zn" node="ha-652395-m04"
	E0802 17:47:30.336363       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d24eb3a9-0a5f-4f16-92f9-51cb43af681a(kube-system/kube-proxy-d44zn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d44zn"
	E0802 17:47:30.336595       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d44zn\": pod kube-proxy-d44zn is already assigned to node \"ha-652395-m04\"" pod="kube-system/kube-proxy-d44zn"
	I0802 17:47:30.336681       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d44zn" node="ha-652395-m04"
	
	
	==> kubelet <==
	Aug 02 17:46:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:46:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:46:54 ha-652395 kubelet[1358]: I0802 17:46:54.703650    1358 topology_manager.go:215] "Topology Admit Handler" podUID="8d2d25e8-37d0-45c4-9b5a-9722d329d86f" podNamespace="default" podName="busybox-fc5497c4f-wwdvm"
	Aug 02 17:46:54 ha-652395 kubelet[1358]: I0802 17:46:54.730296    1358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj2fb\" (UniqueName: \"kubernetes.io/projected/8d2d25e8-37d0-45c4-9b5a-9722d329d86f-kube-api-access-hj2fb\") pod \"busybox-fc5497c4f-wwdvm\" (UID: \"8d2d25e8-37d0-45c4-9b5a-9722d329d86f\") " pod="default/busybox-fc5497c4f-wwdvm"
	Aug 02 17:47:01 ha-652395 kubelet[1358]: E0802 17:47:01.499157    1358 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48918->127.0.0.1:35273: write tcp 127.0.0.1:48918->127.0.0.1:35273: write: broken pipe
	Aug 02 17:47:04 ha-652395 kubelet[1358]: E0802 17:47:04.856413    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:47:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:48:04 ha-652395 kubelet[1358]: E0802 17:48:04.857008    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:48:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:49:04 ha-652395 kubelet[1358]: E0802 17:49:04.856898    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:49:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:50:04 ha-652395 kubelet[1358]: E0802 17:50:04.862833    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:50:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-652395 -n ha-652395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-652395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (3.19240153s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:50:35.451676   28199 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:50:35.451905   28199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:35.451913   28199 out.go:304] Setting ErrFile to fd 2...
	I0802 17:50:35.451917   28199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:35.452069   28199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:50:35.452212   28199 out.go:298] Setting JSON to false
	I0802 17:50:35.452232   28199 mustload.go:65] Loading cluster: ha-652395
	I0802 17:50:35.452320   28199 notify.go:220] Checking for updates...
	I0802 17:50:35.452555   28199 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:50:35.452569   28199 status.go:255] checking status of ha-652395 ...
	I0802 17:50:35.452921   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.452980   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.472550   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I0802 17:50:35.473008   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.473584   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.473611   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.473892   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.474066   28199 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:50:35.475684   28199 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:50:35.475696   28199 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:35.475953   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.475982   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.490530   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0802 17:50:35.490960   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.491392   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.491422   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.491830   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.492031   28199 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:50:35.494524   28199 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:35.494993   28199 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:35.495019   28199 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:35.495182   28199 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:35.495495   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.495549   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.510604   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I0802 17:50:35.510967   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.511424   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.511448   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.511749   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.511928   28199 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:50:35.512118   28199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:35.512142   28199 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:50:35.515135   28199 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:35.515629   28199 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:35.515672   28199 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:35.515780   28199 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:50:35.515947   28199 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:50:35.516063   28199 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:50:35.516189   28199 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:50:35.598912   28199 ssh_runner.go:195] Run: systemctl --version
	I0802 17:50:35.604395   28199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:35.618527   28199 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:35.618553   28199 api_server.go:166] Checking apiserver status ...
	I0802 17:50:35.618591   28199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:35.631917   28199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:50:35.640870   28199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:35.640912   28199 ssh_runner.go:195] Run: ls
	I0802 17:50:35.644985   28199 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:35.649033   28199 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:35.649061   28199 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:50:35.649075   28199 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:35.649093   28199 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:50:35.649393   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.649431   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.664459   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38343
	I0802 17:50:35.664902   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.665405   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.665423   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.665763   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.665938   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:50:35.667466   28199 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:50:35.667486   28199 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:35.667807   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.667840   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.682448   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0802 17:50:35.682892   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.683419   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.683443   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.683762   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.683929   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:50:35.686696   28199 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:35.687129   28199 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:35.687158   28199 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:35.687303   28199 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:35.687669   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:35.687707   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:35.702423   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0802 17:50:35.702794   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:35.703245   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:35.703267   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:35.703610   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:35.703788   28199 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:50:35.703952   28199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:35.703970   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:50:35.706565   28199 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:35.706897   28199 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:35.706922   28199 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:35.707007   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:50:35.707177   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:50:35.707332   28199 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:50:35.707462   28199 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:50:38.267479   28199 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:38.267574   28199 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:50:38.267597   28199 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:38.267605   28199 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:50:38.267629   28199 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:38.267637   28199 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:50:38.267925   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.267963   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.283664   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42037
	I0802 17:50:38.284088   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.284690   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.284717   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.285033   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.285260   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:50:38.286611   28199 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:50:38.286624   28199 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:38.286923   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.286955   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.301757   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
	I0802 17:50:38.302157   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.302525   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.302558   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.302876   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.303041   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:50:38.306092   28199 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:38.306544   28199 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:38.306568   28199 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:38.306735   28199 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:38.307034   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.307069   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.321137   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33727
	I0802 17:50:38.321540   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.321966   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.321987   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.322260   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.322442   28199 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:50:38.322618   28199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:38.322637   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:50:38.325479   28199 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:38.325865   28199 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:38.325888   28199 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:38.326002   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:50:38.326144   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:50:38.326266   28199 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:50:38.326363   28199 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:50:38.407062   28199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:38.422229   28199 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:38.422258   28199 api_server.go:166] Checking apiserver status ...
	I0802 17:50:38.422292   28199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:38.436330   28199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:50:38.445808   28199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:38.445855   28199 ssh_runner.go:195] Run: ls
	I0802 17:50:38.449838   28199 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:38.454219   28199 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:38.454238   28199 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:50:38.454246   28199 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:38.454263   28199 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:50:38.454591   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.454639   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.469948   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
	I0802 17:50:38.470302   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.470791   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.470815   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.471139   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.471366   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:50:38.473087   28199 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:50:38.473099   28199 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:38.473406   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.473442   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.488303   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I0802 17:50:38.488744   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.489363   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.489387   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.489719   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.489890   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:50:38.492717   28199 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:38.493098   28199 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:38.493127   28199 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:38.493241   28199 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:38.493576   28199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:38.493615   28199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:38.508837   28199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45919
	I0802 17:50:38.509337   28199 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:38.509769   28199 main.go:141] libmachine: Using API Version  1
	I0802 17:50:38.509793   28199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:38.510122   28199 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:38.510327   28199 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:50:38.510537   28199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:38.510561   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:50:38.512967   28199 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:38.513333   28199 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:38.513371   28199 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:38.513477   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:50:38.513664   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:50:38.513816   28199 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:50:38.513923   28199 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:50:38.590261   28199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:38.603619   28199 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
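The trace above shows the status probe's apiserver check in full: it finds the kube-apiserver PID with `sudo pgrep -xnf kube-apiserver.*minikube.*`, tries to read that PID's freezer cgroup (which exits 1 on this crio guest, hence the `unable to find freezer cgroup` warning), and then falls back to an HTTPS GET against the load-balanced endpoint `https://192.168.39.254:8443/healthz`, treating a 200 response with body `ok` as a running apiserver; disk pressure is checked separately with `df -h /var | awk 'NR==2{print $5}'` over SSH. The following is an illustrative Go sketch of that healthz fallback only, not minikube's actual status.go; the endpoint URL, timeout, and the skip-verify TLS config are assumptions made to keep the example self-contained (minikube uses the cluster's real CA material).

```go
// Illustrative sketch (assumed helper names, not minikube's implementation):
// probe an apiserver the way the trace above does -- GET /healthz on the
// cluster endpoint and treat HTTP 200 with body "ok" as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The healthz port serves the apiserver's own certificate; skipping
			// verification keeps this sketch runnable without the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.39.254:8443")
	fmt.Println(ok, err)
}
```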
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (5.01013192s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:50:40.099443   28299 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:50:40.099910   28299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:40.099928   28299 out.go:304] Setting ErrFile to fd 2...
	I0802 17:50:40.099937   28299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:40.100481   28299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:50:40.100815   28299 out.go:298] Setting JSON to false
	I0802 17:50:40.101014   28299 mustload.go:65] Loading cluster: ha-652395
	I0802 17:50:40.101019   28299 notify.go:220] Checking for updates...
	I0802 17:50:40.101436   28299 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:50:40.101453   28299 status.go:255] checking status of ha-652395 ...
	I0802 17:50:40.101814   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.101869   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.117565   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36889
	I0802 17:50:40.117899   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.118476   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.118499   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.118949   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.119164   28299 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:50:40.121007   28299 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:50:40.121023   28299 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:40.121427   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.121475   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.136769   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43097
	I0802 17:50:40.137174   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.137600   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.137624   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.137967   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.138148   28299 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:50:40.140960   28299 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:40.141370   28299 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:40.141397   28299 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:40.141520   28299 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:40.141886   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.141941   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.157122   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0802 17:50:40.157593   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.158079   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.158103   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.158477   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.158640   28299 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:50:40.158822   28299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:40.158845   28299 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:50:40.161930   28299 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:40.162373   28299 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:40.162398   28299 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:40.162505   28299 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:50:40.162673   28299 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:50:40.162816   28299 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:50:40.162997   28299 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:50:40.250375   28299 ssh_runner.go:195] Run: systemctl --version
	I0802 17:50:40.256220   28299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:40.270234   28299 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:40.270264   28299 api_server.go:166] Checking apiserver status ...
	I0802 17:50:40.270312   28299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:40.284572   28299 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:50:40.293793   28299 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:40.293848   28299 ssh_runner.go:195] Run: ls
	I0802 17:50:40.298746   28299 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:40.302753   28299 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:40.302775   28299 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:50:40.302784   28299 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:40.302799   28299 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:50:40.303110   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.303150   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.317972   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44115
	I0802 17:50:40.318369   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.318801   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.318820   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.319155   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.319352   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:50:40.321057   28299 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:50:40.321076   28299 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:40.321429   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.321471   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.336973   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I0802 17:50:40.337402   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.337927   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.337946   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.338306   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.338520   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:50:40.341168   28299 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:40.341579   28299 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:40.341614   28299 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:40.341807   28299 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:40.342123   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:40.342164   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:40.357047   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0802 17:50:40.357417   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:40.357896   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:40.357923   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:40.358208   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:40.358367   28299 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:50:40.358543   28299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:40.358565   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:50:40.360919   28299 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:40.361279   28299 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:40.361305   28299 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:40.361428   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:50:40.361595   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:50:40.361752   28299 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:50:40.361888   28299 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:50:41.339417   28299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:41.339472   28299 retry.go:31] will retry after 338.855294ms: dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:44.731464   28299 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:44.731580   28299 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:50:44.731605   28299 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:44.731616   28299 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:50:44.731646   28299 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:44.731659   28299 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:50:44.731982   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.732034   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.746830   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
	I0802 17:50:44.747282   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.747844   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.747900   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.748239   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.748476   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:50:44.750198   28299 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:50:44.750232   28299 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:44.750754   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.750793   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.765028   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38931
	I0802 17:50:44.765382   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.765836   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.765869   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.766209   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.766503   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:50:44.769241   28299 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:44.769718   28299 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:44.769740   28299 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:44.769927   28299 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:44.770224   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.770255   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.785108   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0802 17:50:44.785539   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.785966   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.785994   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.786278   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.786500   28299 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:50:44.786682   28299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:44.786700   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:50:44.789453   28299 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:44.789846   28299 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:44.789876   28299 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:44.790099   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:50:44.790294   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:50:44.790466   28299 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:50:44.790644   28299 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:50:44.870735   28299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:44.886585   28299 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:44.886617   28299 api_server.go:166] Checking apiserver status ...
	I0802 17:50:44.886649   28299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:44.900367   28299 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:50:44.909283   28299 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:44.909347   28299 ssh_runner.go:195] Run: ls
	I0802 17:50:44.913350   28299 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:44.917641   28299 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:44.917666   28299 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:50:44.917675   28299 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:44.917689   28299 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:50:44.917975   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.918010   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.933019   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0802 17:50:44.933429   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.933904   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.933921   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.934219   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.934406   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:50:44.935930   28299 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:50:44.935949   28299 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:44.936429   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.936478   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.951033   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43613
	I0802 17:50:44.951437   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.951936   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.951958   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.952278   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.952478   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:50:44.955327   28299 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:44.955723   28299 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:44.955759   28299 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:44.955906   28299 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:44.956223   28299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:44.956261   28299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:44.971941   28299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0802 17:50:44.972502   28299 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:44.973002   28299 main.go:141] libmachine: Using API Version  1
	I0802 17:50:44.973026   28299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:44.973321   28299 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:44.973510   28299 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:50:44.973691   28299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:44.973717   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:50:44.976533   28299 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:44.976887   28299 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:44.976916   28299 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:44.977043   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:50:44.977221   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:50:44.977380   28299 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:50:44.977519   28299 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:50:45.054182   28299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:45.067793   28299 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
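This second trace shows why the run exits with status 3: the probe for ha-652395-m02 dials 192.168.39.220:22, gets `connect: no route to host`, retries after roughly 340ms (the `retry.go:31` line), and once the retries are exhausted it records the node as `Host:Error` / `Kubelet:Nonexistent` while the other nodes still report Running. Below is a minimal Go sketch of that dial-and-retry shape, assuming an attempt count and backoff of my own choosing; it is not minikube's retry.go or sshutil, just an illustration of the failure path seen above.

```go
// Minimal sketch (assumed attempt count and backoff, not minikube's code):
// dial an SSH port a few times with a short pause and report the node as
// unreachable when every attempt fails, mirroring the
// "no route to host" -> Host:Error path in the trace above.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(backoff)
	}
	return fmt.Errorf("host unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	if err := dialWithRetry("192.168.39.220:22", 3, 350*time.Millisecond); err != nil {
		// A real status probe would mark the node Host:Error / Kubelet:Nonexistent here.
		fmt.Println("status error:", err)
	}
}
```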
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (4.998714841s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:50:46.607616   28401 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:50:46.607742   28401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:46.607752   28401 out.go:304] Setting ErrFile to fd 2...
	I0802 17:50:46.607757   28401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:46.607964   28401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:50:46.608152   28401 out.go:298] Setting JSON to false
	I0802 17:50:46.608182   28401 mustload.go:65] Loading cluster: ha-652395
	I0802 17:50:46.608276   28401 notify.go:220] Checking for updates...
	I0802 17:50:46.608657   28401 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:50:46.608674   28401 status.go:255] checking status of ha-652395 ...
	I0802 17:50:46.609132   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.609202   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.628197   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0802 17:50:46.628662   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.629229   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.629251   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.629659   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.629870   28401 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:50:46.631441   28401 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:50:46.631461   28401 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:46.631833   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.631870   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.646132   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34919
	I0802 17:50:46.646504   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.646973   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.647000   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.647319   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.647513   28401 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:50:46.649697   28401 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:46.650073   28401 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:46.650108   28401 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:46.650208   28401 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:46.650584   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.650627   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.665671   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0802 17:50:46.666071   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.666491   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.666505   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.666820   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.667011   28401 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:50:46.667211   28401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:46.667236   28401 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:50:46.670102   28401 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:46.670281   28401 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:46.670310   28401 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:46.670485   28401 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:50:46.670627   28401 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:50:46.670770   28401 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:50:46.670904   28401 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:50:46.754217   28401 ssh_runner.go:195] Run: systemctl --version
	I0802 17:50:46.760604   28401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:46.774223   28401 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:46.774251   28401 api_server.go:166] Checking apiserver status ...
	I0802 17:50:46.774290   28401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:46.787120   28401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:50:46.797085   28401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:46.797126   28401 ssh_runner.go:195] Run: ls
	I0802 17:50:46.801131   28401 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:46.805569   28401 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:46.805585   28401 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:50:46.805594   28401 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:46.805610   28401 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:50:46.805881   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.805913   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.820491   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
	I0802 17:50:46.820925   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.821387   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.821413   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.821707   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.821890   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:50:46.823388   28401 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:50:46.823406   28401 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:46.823710   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.823748   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.838115   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44949
	I0802 17:50:46.838465   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.838877   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.838902   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.839226   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.839410   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:50:46.842133   28401 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:46.842494   28401 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:46.842531   28401 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:46.842609   28401 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:46.842895   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:46.842928   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:46.857859   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0802 17:50:46.858283   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:46.858890   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:46.858911   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:46.859320   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:46.859553   28401 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:50:46.859776   28401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:46.859794   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:50:46.862595   28401 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:46.863043   28401 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:46.863068   28401 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:46.863203   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:50:46.863345   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:50:46.863479   28401 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:50:46.863589   28401 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:50:47.803301   28401 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:47.803363   28401 retry.go:31] will retry after 345.35785ms: dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:51.227356   28401 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:51.227458   28401 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:50:51.227481   28401 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:51.227493   28401 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:50:51.227520   28401 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:51.227532   28401 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:50:51.227943   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.227991   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.243253   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43335
	I0802 17:50:51.243720   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.244177   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.244199   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.244556   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.244746   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:50:51.246585   28401 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:50:51.246604   28401 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:51.246954   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.246999   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.262258   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0802 17:50:51.262709   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.263175   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.263196   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.263487   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.263683   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:50:51.266205   28401 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:51.266612   28401 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:51.266663   28401 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:51.266824   28401 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:51.267294   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.267348   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.281555   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41017
	I0802 17:50:51.281995   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.282432   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.282460   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.282856   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.283053   28401 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:50:51.283272   28401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:51.283295   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:50:51.286067   28401 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:51.286527   28401 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:51.286558   28401 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:51.286658   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:50:51.286826   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:50:51.286969   28401 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:50:51.287120   28401 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:50:51.366172   28401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:51.380858   28401 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:51.380889   28401 api_server.go:166] Checking apiserver status ...
	I0802 17:50:51.380941   28401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:51.393981   28401 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:50:51.403387   28401 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:51.403432   28401 ssh_runner.go:195] Run: ls
	I0802 17:50:51.407347   28401 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:51.411974   28401 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:51.412000   28401 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:50:51.412011   28401 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:51.412030   28401 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:50:51.412456   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.412493   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.427151   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0802 17:50:51.427691   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.428149   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.428169   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.428494   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.428703   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:50:51.430331   28401 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:50:51.430352   28401 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:51.430672   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.430720   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.445087   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0802 17:50:51.445491   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.446005   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.446028   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.446302   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.446463   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:50:51.449168   28401 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:51.449556   28401 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:51.449587   28401 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:51.449761   28401 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:51.450053   28401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:51.450086   28401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:51.464823   28401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0802 17:50:51.465195   28401 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:51.465695   28401 main.go:141] libmachine: Using API Version  1
	I0802 17:50:51.465717   28401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:51.465983   28401 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:51.466152   28401 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:50:51.466322   28401 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:51.466345   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:50:51.469134   28401 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:51.469551   28401 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:51.469578   28401 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:51.469726   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:50:51.469905   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:50:51.470050   28401 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:50:51.470180   28401 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:50:51.549757   28401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:51.565251   28401 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
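The stderr above records the probe sequence the status command runs against each reachable node: open an SSH session, read `/var` usage with `df`, ask systemd whether the kubelet unit is active, then hit the apiserver `/healthz` endpoint through the load-balancer address from the kubeconfig. The sketch below reproduces that sequence with golang.org/x/crypto/ssh as an illustration of what the log records; it is not minikube's own status code, and the node address, user and key path are simply the ones that appear in the capture for ha-652395-m03.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one session on an existing client and returns the combined
// output of cmd, mirroring the single-command ssh_runner calls in the log.
func runOverSSH(client *ssh.Client, cmd string) (string, error) {
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Address, user and key path are the ones recorded for ha-652395-m03.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.62:22", cfg)
	if err != nil {
		fmt.Println("host unreachable:", err) // the failure mode ha-652395-m02 keeps hitting
		return
	}
	defer client.Close()

	// 1. Storage capacity of /var (same pipeline as in the log).
	usage, _ := runOverSSH(client, `df -h /var | awk 'NR==2{print $5}'`)
	fmt.Println("/var usage:", usage)

	// 2. Kubelet: systemctl exits non-zero when the unit is not active.
	if _, err := runOverSSH(client, "sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet: Stopped or Nonexistent")
	} else {
		fmt.Println("kubelet: Running")
	}

	// 3. API server health via the load-balancer endpoint from the kubeconfig.
	hc := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := hc.Get("https://192.168.39.254:8443/healthz")
	if err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver: Running") // "healthz returned 200: ok" in the log
		}
	}
}
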
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (4.399468827s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:50:53.487586   28517 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:50:53.487703   28517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:53.487713   28517 out.go:304] Setting ErrFile to fd 2...
	I0802 17:50:53.487719   28517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:50:53.487901   28517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:50:53.488075   28517 out.go:298] Setting JSON to false
	I0802 17:50:53.488100   28517 mustload.go:65] Loading cluster: ha-652395
	I0802 17:50:53.488189   28517 notify.go:220] Checking for updates...
	I0802 17:50:53.488586   28517 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:50:53.488603   28517 status.go:255] checking status of ha-652395 ...
	I0802 17:50:53.488999   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.489067   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.504146   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0802 17:50:53.504562   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.505106   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.505120   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.505452   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.505672   28517 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:50:53.507046   28517 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:50:53.507061   28517 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:53.507364   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.507397   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.523156   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42531
	I0802 17:50:53.523535   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.524004   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.524038   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.524446   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.524682   28517 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:50:53.527491   28517 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:53.527872   28517 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:53.527894   28517 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:53.528056   28517 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:50:53.528362   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.528394   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.543716   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0802 17:50:53.544134   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.544553   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.544575   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.544866   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.545067   28517 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:50:53.545263   28517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:53.545302   28517 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:50:53.547954   28517 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:53.548349   28517 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:50:53.548376   28517 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:50:53.548500   28517 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:50:53.548667   28517 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:50:53.548813   28517 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:50:53.548960   28517 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:50:53.630168   28517 ssh_runner.go:195] Run: systemctl --version
	I0802 17:50:53.636036   28517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:53.650447   28517 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:53.650473   28517 api_server.go:166] Checking apiserver status ...
	I0802 17:50:53.650502   28517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:53.663481   28517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:50:53.673227   28517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:53.673298   28517 ssh_runner.go:195] Run: ls
	I0802 17:50:53.677211   28517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:53.683026   28517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:53.683046   28517 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:50:53.683061   28517 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:53.683075   28517 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:50:53.683393   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.683428   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.698380   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0802 17:50:53.698748   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.699175   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.699199   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.699505   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.699751   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:50:53.701189   28517 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:50:53.701206   28517 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:53.701537   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.701577   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.716402   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0802 17:50:53.716769   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.717198   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.717234   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.717544   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.717745   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:50:53.720404   28517 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:53.720870   28517 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:53.720893   28517 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:53.721040   28517 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:50:53.721397   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:53.721435   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:53.736368   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0802 17:50:53.736790   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:53.737225   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:53.737245   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:53.737582   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:53.737775   28517 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:50:53.737965   28517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:53.737983   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:50:53.740831   28517 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:53.741214   28517 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:50:53.741238   28517 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:50:53.741390   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:50:53.741546   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:50:53.741694   28517 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:50:53.741822   28517 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:50:54.299306   28517 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:54.299366   28517 retry.go:31] will retry after 150.553689ms: dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:57.499368   28517 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:50:57.499446   28517 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:50:57.499461   28517 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:57.499467   28517 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:50:57.499502   28517 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:50:57.499509   28517 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:50:57.499799   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.499840   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.515587   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40875
	I0802 17:50:57.516022   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.516506   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.516531   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.516854   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.517060   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:50:57.518573   28517 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:50:57.518590   28517 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:57.518876   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.518912   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.534160   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44119
	I0802 17:50:57.534560   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.534990   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.535015   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.535391   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.535609   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:50:57.538665   28517 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:57.539187   28517 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:57.539219   28517 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:57.539366   28517 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:50:57.539672   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.539717   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.555951   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42417
	I0802 17:50:57.556336   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.556816   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.556836   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.557127   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.557334   28517 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:50:57.557532   28517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:57.557557   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:50:57.560584   28517 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:57.560984   28517 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:50:57.561025   28517 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:50:57.561191   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:50:57.561351   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:50:57.561480   28517 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:50:57.561605   28517 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:50:57.646029   28517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:57.659350   28517 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:50:57.659385   28517 api_server.go:166] Checking apiserver status ...
	I0802 17:50:57.659435   28517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:50:57.673068   28517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:50:57.682651   28517 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:50:57.682709   28517 ssh_runner.go:195] Run: ls
	I0802 17:50:57.686723   28517 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:50:57.693821   28517 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:50:57.693844   28517 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:50:57.693852   28517 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:50:57.693868   28517 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:50:57.694159   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.694190   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.709155   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0802 17:50:57.709543   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.709986   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.710010   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.710278   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.710451   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:50:57.711863   28517 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:50:57.711891   28517 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:57.712148   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.712181   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.727267   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33139
	I0802 17:50:57.727687   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.728084   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.728112   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.728458   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.728652   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:50:57.731950   28517 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:57.732420   28517 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:57.732454   28517 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:57.732693   28517 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:50:57.732967   28517 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:50:57.732998   28517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:50:57.748198   28517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0802 17:50:57.748643   28517 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:50:57.749135   28517 main.go:141] libmachine: Using API Version  1
	I0802 17:50:57.749156   28517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:50:57.749461   28517 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:50:57.749634   28517 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:50:57.749808   28517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:50:57.749827   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:50:57.752409   28517 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:57.752826   28517 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:50:57.752852   28517 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:50:57.752961   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:50:57.753139   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:50:57.753270   28517 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:50:57.753416   28517 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:50:57.830468   28517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:50:57.844413   28517 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
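For ha-652395-m02 the same probe never gets that far: the SSH dial fails with "no route to host", is retried briefly ("will retry after 150.553689ms"), and the node is then reported as Host:Error with kubelet and apiserver Nonexistent. Below is a minimal, stdlib-only sketch of that bounded dial-and-retry pattern; the address and the retry delay are taken from the log, while the attempt count and per-dial timeout are assumptions for illustration.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry checks whether the node's SSH port is reachable, retrying a
// fixed number of times before giving up, similar in spirit to the retries
// logged above (this is not minikube's retry.go).
func dialWithRetry(addr string, attempts int, wait time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 5*time.Second) // timeout is an assumption
		if err == nil {
			conn.Close()
			return nil // port 22 reachable, the SSH session can proceed
		}
		fmt.Printf("dial failure (will retry): %v\n", err)
		time.Sleep(wait)
	}
	return err // e.g. "connect: no route to host" for ha-652395-m02
}

func main() {
	if err := dialWithRetry("192.168.39.220:22", 3, 150*time.Millisecond); err != nil {
		// This is the point at which the status check falls back to
		// Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
		fmt.Println("status error:", err)
	}
}
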
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (3.68627372s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:51:02.356571   28618 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:51:02.356869   28618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:02.356880   28618 out.go:304] Setting ErrFile to fd 2...
	I0802 17:51:02.356888   28618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:02.357141   28618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:51:02.357358   28618 out.go:298] Setting JSON to false
	I0802 17:51:02.357388   28618 mustload.go:65] Loading cluster: ha-652395
	I0802 17:51:02.357508   28618 notify.go:220] Checking for updates...
	I0802 17:51:02.357839   28618 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:51:02.357880   28618 status.go:255] checking status of ha-652395 ...
	I0802 17:51:02.358238   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.358303   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.377821   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0802 17:51:02.378246   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.378951   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.378984   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.379420   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.379614   28618 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:51:02.381171   28618 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:51:02.381189   28618 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:02.381464   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.381501   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.396375   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36385
	I0802 17:51:02.396731   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.397208   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.397226   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.397579   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.397841   28618 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:51:02.400728   28618 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:02.401208   28618 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:02.401247   28618 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:02.401351   28618 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:02.401752   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.401826   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.416613   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0802 17:51:02.416979   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.417435   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.417457   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.417772   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.417981   28618 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:51:02.418164   28618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:02.418188   28618 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:51:02.421300   28618 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:02.421781   28618 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:02.421804   28618 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:02.422019   28618 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:51:02.422196   28618 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:51:02.422343   28618 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:51:02.422453   28618 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:51:02.502530   28618 ssh_runner.go:195] Run: systemctl --version
	I0802 17:51:02.508870   28618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:02.522741   28618 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:02.522765   28618 api_server.go:166] Checking apiserver status ...
	I0802 17:51:02.522801   28618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:02.536585   28618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:51:02.545226   28618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:02.545270   28618 ssh_runner.go:195] Run: ls
	I0802 17:51:02.549245   28618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:02.553496   28618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:02.553513   28618 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:51:02.553522   28618 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:02.553537   28618 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:51:02.553824   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.553858   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.568773   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0802 17:51:02.569284   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.569788   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.569814   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.570143   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.570336   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:51:02.571799   28618 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:51:02.571815   28618 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:51:02.572087   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.572119   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.586391   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I0802 17:51:02.586761   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.587241   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.587258   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.587528   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.587700   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:51:02.590360   28618 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:02.590820   28618 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:51:02.590842   28618 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:02.590968   28618 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:51:02.591385   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:02.591427   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:02.605627   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0802 17:51:02.605970   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:02.606444   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:02.606466   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:02.606770   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:02.606950   28618 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:51:02.607117   28618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:02.607142   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:51:02.609792   28618 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:02.610182   28618 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:51:02.610219   28618 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:02.610317   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:51:02.610515   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:51:02.610663   28618 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:51:02.610807   28618 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:51:05.663396   28618 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:51:05.663504   28618 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:51:05.663525   28618 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:51:05.663540   28618 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:51:05.663558   28618 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:51:05.663565   28618 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:51:05.663853   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.663892   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.678692   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46683
	I0802 17:51:05.679130   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.679611   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.679636   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.679991   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.680189   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:51:05.681879   28618 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:51:05.681892   28618 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:05.682166   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.682198   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.698144   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36063
	I0802 17:51:05.698624   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.699166   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.699188   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.699473   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.699703   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:51:05.702335   28618 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:05.702794   28618 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:05.702820   28618 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:05.702926   28618 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:05.703259   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.703293   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.717447   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41515
	I0802 17:51:05.717946   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.718514   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.718535   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.718794   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.718962   28618 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:51:05.719407   28618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:05.719425   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:51:05.722503   28618 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:05.723028   28618 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:05.723062   28618 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:05.723238   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:51:05.723385   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:51:05.723556   28618 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:51:05.723696   28618 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:51:05.802466   28618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:05.816475   28618 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:05.816499   28618 api_server.go:166] Checking apiserver status ...
	I0802 17:51:05.816528   28618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:05.829803   28618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:51:05.838803   28618 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:05.838844   28618 ssh_runner.go:195] Run: ls
	I0802 17:51:05.842926   28618 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:05.848472   28618 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:05.848491   28618 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:51:05.848508   28618 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:05.848524   28618 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:51:05.848793   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.848822   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.863436   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0802 17:51:05.863883   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.864315   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.864335   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.864709   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.864882   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:51:05.866512   28618 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:51:05.866526   28618 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:05.866795   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.866825   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.882520   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0802 17:51:05.882936   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.883478   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.883503   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.883849   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.884008   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:51:05.887016   28618 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:05.887428   28618 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:05.887454   28618 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:05.887575   28618 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:05.887900   28618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:05.887944   28618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:05.903167   28618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0802 17:51:05.903701   28618 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:05.904172   28618 main.go:141] libmachine: Using API Version  1
	I0802 17:51:05.904197   28618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:05.904673   28618 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:05.904875   28618 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:51:05.905048   28618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:05.905071   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:51:05.907569   28618 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:05.907977   28618 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:05.907998   28618 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:05.908111   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:51:05.908260   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:51:05.908394   28618 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:51:05.908558   28618 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:51:05.986857   28618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:06.002628   28618 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
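The tail of this trace shows the probe sequence the status command runs against the worker node ha-652395-m04 once an SSH session is up: disk usage on /var (`df -h /var | awk 'NR==2{print $5}'`) followed by a kubelet liveness check (`sudo systemctl is-active --quiet service kubelet`). Below is a minimal sketch of that kubelet check, run locally for illustration rather than over SSH as the log shows; the file name and output strings are made up for the sketch and are not minikube's actual status mapping.

```go
// kubelet_probe.go — illustrative only; minikube runs this check over SSH on the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active and
	// non-zero otherwise, so the error value alone answers the question.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err == nil {
		fmt.Println("kubelet: Running")
	} else {
		fmt.Println("kubelet: Stopped")
	}
}
```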
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (3.72087281s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:51:09.105695   28736 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:51:09.106071   28736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:09.106233   28736 out.go:304] Setting ErrFile to fd 2...
	I0802 17:51:09.106249   28736 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:09.106561   28736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:51:09.106735   28736 out.go:298] Setting JSON to false
	I0802 17:51:09.106758   28736 mustload.go:65] Loading cluster: ha-652395
	I0802 17:51:09.106798   28736 notify.go:220] Checking for updates...
	I0802 17:51:09.107261   28736 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:51:09.107283   28736 status.go:255] checking status of ha-652395 ...
	I0802 17:51:09.107705   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.107777   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.123550   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0802 17:51:09.123973   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.124616   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.124633   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.125090   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.125309   28736 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:51:09.126978   28736 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:51:09.126995   28736 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:09.127342   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.127393   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.142012   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38137
	I0802 17:51:09.142470   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.142907   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.142928   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.143313   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.143518   28736 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:51:09.146270   28736 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:09.146678   28736 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:09.146716   28736 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:09.146886   28736 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:09.147179   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.147217   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.163372   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46567
	I0802 17:51:09.163739   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.164182   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.164206   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.164568   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.164793   28736 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:51:09.165074   28736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:09.165102   28736 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:51:09.167831   28736 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:09.168191   28736 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:09.168212   28736 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:09.168383   28736 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:51:09.168559   28736 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:51:09.168716   28736 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:51:09.168867   28736 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:51:09.250495   28736 ssh_runner.go:195] Run: systemctl --version
	I0802 17:51:09.256614   28736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:09.270434   28736 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:09.270459   28736 api_server.go:166] Checking apiserver status ...
	I0802 17:51:09.270489   28736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:09.284889   28736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:51:09.300401   28736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:09.300464   28736 ssh_runner.go:195] Run: ls
	I0802 17:51:09.305319   28736 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:09.309668   28736 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:09.309689   28736 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:51:09.309701   28736 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:09.309718   28736 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:51:09.310021   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.310066   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.324636   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I0802 17:51:09.325043   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.325539   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.325561   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.325922   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.326130   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:51:09.327845   28736 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 17:51:09.327860   28736 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:51:09.328179   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.328214   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.343473   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38417
	I0802 17:51:09.343889   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.344304   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.344326   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.344651   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.344818   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:51:09.347937   28736 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:09.348361   28736 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:51:09.348388   28736 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:09.348546   28736 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 17:51:09.348901   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:09.348937   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:09.363706   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I0802 17:51:09.364115   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:09.364555   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:09.364572   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:09.364877   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:09.365054   28736 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:51:09.365245   28736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:09.365267   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:51:09.368211   28736 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:09.368779   28736 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:51:09.368797   28736 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:51:09.368980   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:51:09.369153   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:51:09.369339   28736 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:51:09.369505   28736 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	W0802 17:51:12.447372   28736 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.220:22: connect: no route to host
	W0802 17:51:12.447475   28736 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	E0802 17:51:12.447493   28736 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:51:12.447503   28736 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0802 17:51:12.447520   28736 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.220:22: connect: no route to host
	I0802 17:51:12.447531   28736 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:51:12.447847   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.447893   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.462439   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I0802 17:51:12.462867   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.463427   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.463459   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.463780   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.463979   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:51:12.465783   28736 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:51:12.465803   28736 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:12.466093   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.466134   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.480389   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0802 17:51:12.480758   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.481177   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.481198   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.481507   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.481709   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:51:12.484527   28736 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:12.484922   28736 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:12.484957   28736 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:12.485066   28736 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:12.485361   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.485410   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.499560   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43415
	I0802 17:51:12.499934   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.500371   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.500397   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.500708   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.500912   28736 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:51:12.501082   28736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:12.501100   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:51:12.504088   28736 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:12.504534   28736 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:12.504560   28736 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:12.504757   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:51:12.504939   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:51:12.505079   28736 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:51:12.505198   28736 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:51:12.586286   28736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:12.600951   28736 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:12.600980   28736 api_server.go:166] Checking apiserver status ...
	I0802 17:51:12.601010   28736 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:12.614674   28736 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:51:12.623806   28736 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:12.623898   28736 ssh_runner.go:195] Run: ls
	I0802 17:51:12.628490   28736 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:12.635021   28736 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:12.635044   28736 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:51:12.635052   28736 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:12.635075   28736 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:51:12.635413   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.635451   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.650662   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0802 17:51:12.651062   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.651509   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.651529   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.651827   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.652006   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:51:12.653600   28736 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:51:12.653617   28736 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:12.653893   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.653925   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.668004   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36693
	I0802 17:51:12.668380   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.668802   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.668832   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.669199   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.669406   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:51:12.672439   28736 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:12.672970   28736 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:12.673000   28736 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:12.673154   28736 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:12.673424   28736 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:12.673455   28736 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:12.687807   28736 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I0802 17:51:12.688143   28736 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:12.688605   28736 main.go:141] libmachine: Using API Version  1
	I0802 17:51:12.688628   28736 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:12.688913   28736 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:12.689088   28736 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:51:12.689280   28736 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:12.689299   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:51:12.692207   28736 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:12.692576   28736 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:12.692601   28736 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:12.692706   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:51:12.692861   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:51:12.693033   28736 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:51:12.693177   28736 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:51:12.770443   28736 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:12.784528   28736 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
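This second run exits with status 3: the kvm2 driver's GetState still reports ha-652395-m02 as "Running", but the SSH dial to 192.168.39.220:22 fails with "no route to host", so the node is reported as host: Error with kubelet and apiserver Nonexistent rather than a clean Stopped. The sketch below shows that reachability-to-status mapping under simplified assumptions: a plain TCP dial stands in for the SSH session, and the nodeStatus type is invented for the example, not minikube's internal API.

```go
// reachability.go — hypothetical mapping of an SSH dial failure to the status seen above.
package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus mirrors the fields printed in the stdout block; the type is invented for this sketch.
type nodeStatus struct {
	Host, Kubelet, APIServer string
}

func probe(addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		// The driver says the machine is running, but the guest is unreachable:
		// report an error instead of a clean "Stopped".
		return nodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	fmt.Printf("%+v\n", probe("192.168.39.220:22"))
}
```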
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 7 (610.799052ms)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-652395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:51:23.034301   28888 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:51:23.034427   28888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:23.034437   28888 out.go:304] Setting ErrFile to fd 2...
	I0802 17:51:23.034443   28888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:23.034643   28888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:51:23.034840   28888 out.go:298] Setting JSON to false
	I0802 17:51:23.034869   28888 mustload.go:65] Loading cluster: ha-652395
	I0802 17:51:23.034951   28888 notify.go:220] Checking for updates...
	I0802 17:51:23.035313   28888 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:51:23.035331   28888 status.go:255] checking status of ha-652395 ...
	I0802 17:51:23.035747   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.035810   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.054743   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0802 17:51:23.055201   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.055896   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.055915   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.056258   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.056463   28888 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:51:23.058543   28888 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 17:51:23.058559   28888 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:23.058850   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.058889   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.074748   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38713
	I0802 17:51:23.075232   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.075705   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.075738   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.076047   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.076328   28888 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:51:23.079160   28888 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:23.079561   28888 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:23.079591   28888 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:23.079727   28888 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:51:23.080128   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.080176   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.094606   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36725
	I0802 17:51:23.094996   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.095563   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.095589   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.095930   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.096115   28888 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:51:23.096286   28888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:23.096307   28888 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:51:23.099122   28888 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:23.099655   28888 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:51:23.099687   28888 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:51:23.099798   28888 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:51:23.099944   28888 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:51:23.100090   28888 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:51:23.100231   28888 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:51:23.183666   28888 ssh_runner.go:195] Run: systemctl --version
	I0802 17:51:23.190063   28888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:23.205033   28888 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:23.205060   28888 api_server.go:166] Checking apiserver status ...
	I0802 17:51:23.205106   28888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:23.220400   28888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup
	W0802 17:51:23.231554   28888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1207/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:23.231623   28888 ssh_runner.go:195] Run: ls
	I0802 17:51:23.235977   28888 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:23.242126   28888 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:23.242152   28888 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 17:51:23.242165   28888 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:23.242183   28888 status.go:255] checking status of ha-652395-m02 ...
	I0802 17:51:23.242514   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.242558   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.257020   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0802 17:51:23.257419   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.257831   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.257853   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.258198   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.258395   28888 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:51:23.260090   28888 status.go:330] ha-652395-m02 host status = "Stopped" (err=<nil>)
	I0802 17:51:23.260105   28888 status.go:343] host is not running, skipping remaining checks
	I0802 17:51:23.260119   28888 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:23.260139   28888 status.go:255] checking status of ha-652395-m03 ...
	I0802 17:51:23.260416   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.260461   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.275641   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45883
	I0802 17:51:23.276010   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.276445   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.276512   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.276866   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.277087   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:51:23.278615   28888 status.go:330] ha-652395-m03 host status = "Running" (err=<nil>)
	I0802 17:51:23.278631   28888 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:23.278922   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.278963   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.295856   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I0802 17:51:23.296332   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.296851   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.296871   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.297195   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.297379   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:51:23.300633   28888 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:23.301066   28888 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:23.301089   28888 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:23.301255   28888 host.go:66] Checking if "ha-652395-m03" exists ...
	I0802 17:51:23.301569   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.301624   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.316260   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I0802 17:51:23.316721   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.317234   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.317267   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.317609   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.317795   28888 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:51:23.317959   28888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:23.317980   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:51:23.320791   28888 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:23.321260   28888 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:23.321289   28888 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:23.321443   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:51:23.321636   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:51:23.321829   28888 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:51:23.321965   28888 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:51:23.402881   28888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:23.416218   28888 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 17:51:23.416246   28888 api_server.go:166] Checking apiserver status ...
	I0802 17:51:23.416286   28888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:51:23.431970   28888 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	W0802 17:51:23.440873   28888 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 17:51:23.440922   28888 ssh_runner.go:195] Run: ls
	I0802 17:51:23.450157   28888 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 17:51:23.454325   28888 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 17:51:23.454346   28888 status.go:422] ha-652395-m03 apiserver status = Running (err=<nil>)
	I0802 17:51:23.454353   28888 status.go:257] ha-652395-m03 status: &{Name:ha-652395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 17:51:23.454367   28888 status.go:255] checking status of ha-652395-m04 ...
	I0802 17:51:23.454719   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.454765   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.469222   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I0802 17:51:23.469710   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.470155   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.470182   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.470484   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.470641   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:51:23.472074   28888 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 17:51:23.472088   28888 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:23.472347   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.472377   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.486294   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I0802 17:51:23.486723   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.487216   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.487241   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.487567   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.487730   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 17:51:23.490402   28888 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:23.490872   28888 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:23.490901   28888 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:23.490999   28888 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 17:51:23.491330   28888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:23.491373   28888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:23.506307   28888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34429
	I0802 17:51:23.506766   28888 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:23.507249   28888 main.go:141] libmachine: Using API Version  1
	I0802 17:51:23.507269   28888 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:23.507611   28888 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:23.507790   28888 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:51:23.507989   28888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 17:51:23.508014   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:51:23.510542   28888 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:23.510927   28888 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:23.510955   28888 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:23.511092   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:51:23.511275   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:51:23.511450   28888 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:51:23.511603   28888 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:51:23.586568   28888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:51:23.603575   28888 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
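By this third run the driver reports ha-652395-m02 as Stopped, the remaining checks are skipped ("host is not running, skipping remaining checks"), and the command exits with status 7 instead of 3. For the nodes that are up, each control-plane probe ends with a 200 from the apiserver health endpoint (https://192.168.39.254:8443/healthz returned 200: ok). A minimal sketch of such a health probe follows; it assumes the cluster certificate is not in the local trust store and therefore skips TLS verification, and it is not minikube's client code.

```go
// healthz_probe.go — illustrative apiserver health check against the VIP seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the apiserver cert is not locally trusted, so verification is
		// skipped for this throwaway probe. Do not do this in real clients.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}
```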
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr" : exit status 7
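The test treats the non-zero exits above (3 while m02 is in Error state, 7 once it reports Stopped) as failures, which is what the message at ha_test.go:432 records. Below is a hypothetical stand-alone reproduction that runs the same command and surfaces its exit code; the binary path and flags are copied from the log, everything else is illustrative.

```go
// status_exit.go — run the status command from the log and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-652395", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: this is what the test treats as a failure.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run command:", err)
	}
}
```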
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-652395 -n ha-652395
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-652395 logs -n 25: (1.297542971s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m03_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m04 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp testdata/cp-test.txt                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m04_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03:/home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m03 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-652395 node stop m02 -v=7                                                     | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-652395 node start m02 -v=7                                                    | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:43:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:43:27.532885   23378 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:43:27.533001   23378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:27.533009   23378 out.go:304] Setting ErrFile to fd 2...
	I0802 17:43:27.533014   23378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:27.533193   23378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:43:27.533719   23378 out.go:298] Setting JSON to false
	I0802 17:43:27.534584   23378 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1551,"bootTime":1722619056,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:43:27.534653   23378 start.go:139] virtualization: kvm guest
	I0802 17:43:27.536601   23378 out.go:177] * [ha-652395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:43:27.537875   23378 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:43:27.537935   23378 notify.go:220] Checking for updates...
	I0802 17:43:27.540169   23378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:43:27.541454   23378 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:43:27.542558   23378 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:27.543731   23378 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:43:27.544829   23378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:43:27.546055   23378 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:43:27.579712   23378 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 17:43:27.580856   23378 start.go:297] selected driver: kvm2
	I0802 17:43:27.580872   23378 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:43:27.580894   23378 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:43:27.581571   23378 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:43:27.581645   23378 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:43:27.597294   23378 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:43:27.597338   23378 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:43:27.597546   23378 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:43:27.597599   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:43:27.597611   23378 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0802 17:43:27.597616   23378 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0802 17:43:27.597669   23378 start.go:340] cluster config:
	{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:27.597769   23378 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:43:27.600220   23378 out.go:177] * Starting "ha-652395" primary control-plane node in "ha-652395" cluster
	I0802 17:43:27.601213   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:43:27.601246   23378 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:43:27.601256   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:43:27.601342   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:43:27.601353   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:43:27.601668   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:43:27.601693   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json: {Name:mk3e0527528bd55e492678cbdc26edd1c1b05506 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:27.601826   23378 start.go:360] acquireMachinesLock for ha-652395: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:43:27.601855   23378 start.go:364] duration metric: took 16.128µs to acquireMachinesLock for "ha-652395"
	I0802 17:43:27.601871   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:43:27.601926   23378 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 17:43:27.603424   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:43:27.603563   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:27.603607   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:27.617511   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0802 17:43:27.617942   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:27.618488   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:43:27.618508   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:27.618824   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:27.619007   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:27.619196   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:27.619335   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:43:27.619358   23378 client.go:168] LocalClient.Create starting
	I0802 17:43:27.619382   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:43:27.619410   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:43:27.619432   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:43:27.619484   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:43:27.619502   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:43:27.619515   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:43:27.619530   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:43:27.619538   23378 main.go:141] libmachine: (ha-652395) Calling .PreCreateCheck
	I0802 17:43:27.619938   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:27.620343   23378 main.go:141] libmachine: Creating machine...
	I0802 17:43:27.620359   23378 main.go:141] libmachine: (ha-652395) Calling .Create
	I0802 17:43:27.620483   23378 main.go:141] libmachine: (ha-652395) Creating KVM machine...
	I0802 17:43:27.621647   23378 main.go:141] libmachine: (ha-652395) DBG | found existing default KVM network
	I0802 17:43:27.622422   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.622287   23401 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0802 17:43:27.622460   23378 main.go:141] libmachine: (ha-652395) DBG | created network xml: 
	I0802 17:43:27.622468   23378 main.go:141] libmachine: (ha-652395) DBG | <network>
	I0802 17:43:27.622474   23378 main.go:141] libmachine: (ha-652395) DBG |   <name>mk-ha-652395</name>
	I0802 17:43:27.622481   23378 main.go:141] libmachine: (ha-652395) DBG |   <dns enable='no'/>
	I0802 17:43:27.622493   23378 main.go:141] libmachine: (ha-652395) DBG |   
	I0802 17:43:27.622504   23378 main.go:141] libmachine: (ha-652395) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 17:43:27.622515   23378 main.go:141] libmachine: (ha-652395) DBG |     <dhcp>
	I0802 17:43:27.622527   23378 main.go:141] libmachine: (ha-652395) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 17:43:27.622539   23378 main.go:141] libmachine: (ha-652395) DBG |     </dhcp>
	I0802 17:43:27.622555   23378 main.go:141] libmachine: (ha-652395) DBG |   </ip>
	I0802 17:43:27.622585   23378 main.go:141] libmachine: (ha-652395) DBG |   
	I0802 17:43:27.622603   23378 main.go:141] libmachine: (ha-652395) DBG | </network>
	I0802 17:43:27.622656   23378 main.go:141] libmachine: (ha-652395) DBG | 
	I0802 17:43:27.627331   23378 main.go:141] libmachine: (ha-652395) DBG | trying to create private KVM network mk-ha-652395 192.168.39.0/24...
	I0802 17:43:27.693211   23378 main.go:141] libmachine: (ha-652395) DBG | private KVM network mk-ha-652395 192.168.39.0/24 created
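The lines above show the libvirt network XML the kvm2 driver generates for this cluster: an isolated bridge named mk-ha-652395, DNS disabled, and DHCP handing out most of 192.168.39.0/24. As a rough, hedged illustration of the same idea (not minikube's actual code), the sketch below renders an equivalent definition from a template and registers it with the stock virsh CLI; the network name and addresses are copied from the log, everything else is an assumption.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"text/template"
)

// Same shape as the <network> document printed in the log above.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>`

type netParams struct {
	Name, Gateway, ClientMin, ClientMax string
}

func main() {
	p := netParams{Name: "mk-ha-652395", Gateway: "192.168.39.1", ClientMin: "192.168.39.2", ClientMax: "192.168.39.253"}
	var buf bytes.Buffer
	if err := template.Must(template.New("net").Parse(networkTmpl)).Execute(&buf, p); err != nil {
		panic(err)
	}
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(buf.Bytes()); err != nil {
		panic(err)
	}
	f.Close()
	// net-define registers the network persistently; net-start brings it up.
	for _, args := range [][]string{{"net-define", f.Name()}, {"net-start", p.Name}} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s (err: %v)\n", args, out, err)
	}
}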
	I0802 17:43:27.693246   23378 main.go:141] libmachine: (ha-652395) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 ...
	I0802 17:43:27.693260   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.693209   23401 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:27.693269   23378 main.go:141] libmachine: (ha-652395) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:43:27.693355   23378 main.go:141] libmachine: (ha-652395) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:43:27.936362   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:27.936220   23401 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa...
	I0802 17:43:28.110545   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:28.110410   23401 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/ha-652395.rawdisk...
	I0802 17:43:28.110582   23378 main.go:141] libmachine: (ha-652395) DBG | Writing magic tar header
	I0802 17:43:28.110603   23378 main.go:141] libmachine: (ha-652395) DBG | Writing SSH key tar header
	I0802 17:43:28.110615   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:28.110557   23401 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 ...
	I0802 17:43:28.110702   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395
	I0802 17:43:28.110739   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:43:28.110773   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:28.110801   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395 (perms=drwx------)
	I0802 17:43:28.110819   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:43:28.110838   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:43:28.110852   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:43:28.110863   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:43:28.110881   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:43:28.110894   23378 main.go:141] libmachine: (ha-652395) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:43:28.110907   23378 main.go:141] libmachine: (ha-652395) Creating domain...
	I0802 17:43:28.110983   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:43:28.111025   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:43:28.111042   23378 main.go:141] libmachine: (ha-652395) DBG | Checking permissions on dir: /home
	I0802 17:43:28.111053   23378 main.go:141] libmachine: (ha-652395) DBG | Skipping /home - not owner
	I0802 17:43:28.111908   23378 main.go:141] libmachine: (ha-652395) define libvirt domain using xml: 
	I0802 17:43:28.111926   23378 main.go:141] libmachine: (ha-652395) <domain type='kvm'>
	I0802 17:43:28.111936   23378 main.go:141] libmachine: (ha-652395)   <name>ha-652395</name>
	I0802 17:43:28.111944   23378 main.go:141] libmachine: (ha-652395)   <memory unit='MiB'>2200</memory>
	I0802 17:43:28.111953   23378 main.go:141] libmachine: (ha-652395)   <vcpu>2</vcpu>
	I0802 17:43:28.111960   23378 main.go:141] libmachine: (ha-652395)   <features>
	I0802 17:43:28.111968   23378 main.go:141] libmachine: (ha-652395)     <acpi/>
	I0802 17:43:28.111975   23378 main.go:141] libmachine: (ha-652395)     <apic/>
	I0802 17:43:28.111983   23378 main.go:141] libmachine: (ha-652395)     <pae/>
	I0802 17:43:28.112001   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112010   23378 main.go:141] libmachine: (ha-652395)   </features>
	I0802 17:43:28.112019   23378 main.go:141] libmachine: (ha-652395)   <cpu mode='host-passthrough'>
	I0802 17:43:28.112028   23378 main.go:141] libmachine: (ha-652395)   
	I0802 17:43:28.112035   23378 main.go:141] libmachine: (ha-652395)   </cpu>
	I0802 17:43:28.112044   23378 main.go:141] libmachine: (ha-652395)   <os>
	I0802 17:43:28.112050   23378 main.go:141] libmachine: (ha-652395)     <type>hvm</type>
	I0802 17:43:28.112056   23378 main.go:141] libmachine: (ha-652395)     <boot dev='cdrom'/>
	I0802 17:43:28.112063   23378 main.go:141] libmachine: (ha-652395)     <boot dev='hd'/>
	I0802 17:43:28.112071   23378 main.go:141] libmachine: (ha-652395)     <bootmenu enable='no'/>
	I0802 17:43:28.112078   23378 main.go:141] libmachine: (ha-652395)   </os>
	I0802 17:43:28.112087   23378 main.go:141] libmachine: (ha-652395)   <devices>
	I0802 17:43:28.112102   23378 main.go:141] libmachine: (ha-652395)     <disk type='file' device='cdrom'>
	I0802 17:43:28.112115   23378 main.go:141] libmachine: (ha-652395)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/boot2docker.iso'/>
	I0802 17:43:28.112127   23378 main.go:141] libmachine: (ha-652395)       <target dev='hdc' bus='scsi'/>
	I0802 17:43:28.112135   23378 main.go:141] libmachine: (ha-652395)       <readonly/>
	I0802 17:43:28.112140   23378 main.go:141] libmachine: (ha-652395)     </disk>
	I0802 17:43:28.112147   23378 main.go:141] libmachine: (ha-652395)     <disk type='file' device='disk'>
	I0802 17:43:28.112158   23378 main.go:141] libmachine: (ha-652395)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:43:28.112179   23378 main.go:141] libmachine: (ha-652395)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/ha-652395.rawdisk'/>
	I0802 17:43:28.112195   23378 main.go:141] libmachine: (ha-652395)       <target dev='hda' bus='virtio'/>
	I0802 17:43:28.112204   23378 main.go:141] libmachine: (ha-652395)     </disk>
	I0802 17:43:28.112214   23378 main.go:141] libmachine: (ha-652395)     <interface type='network'>
	I0802 17:43:28.112223   23378 main.go:141] libmachine: (ha-652395)       <source network='mk-ha-652395'/>
	I0802 17:43:28.112229   23378 main.go:141] libmachine: (ha-652395)       <model type='virtio'/>
	I0802 17:43:28.112237   23378 main.go:141] libmachine: (ha-652395)     </interface>
	I0802 17:43:28.112248   23378 main.go:141] libmachine: (ha-652395)     <interface type='network'>
	I0802 17:43:28.112260   23378 main.go:141] libmachine: (ha-652395)       <source network='default'/>
	I0802 17:43:28.112273   23378 main.go:141] libmachine: (ha-652395)       <model type='virtio'/>
	I0802 17:43:28.112294   23378 main.go:141] libmachine: (ha-652395)     </interface>
	I0802 17:43:28.112304   23378 main.go:141] libmachine: (ha-652395)     <serial type='pty'>
	I0802 17:43:28.112313   23378 main.go:141] libmachine: (ha-652395)       <target port='0'/>
	I0802 17:43:28.112320   23378 main.go:141] libmachine: (ha-652395)     </serial>
	I0802 17:43:28.112331   23378 main.go:141] libmachine: (ha-652395)     <console type='pty'>
	I0802 17:43:28.112346   23378 main.go:141] libmachine: (ha-652395)       <target type='serial' port='0'/>
	I0802 17:43:28.112364   23378 main.go:141] libmachine: (ha-652395)     </console>
	I0802 17:43:28.112374   23378 main.go:141] libmachine: (ha-652395)     <rng model='virtio'>
	I0802 17:43:28.112386   23378 main.go:141] libmachine: (ha-652395)       <backend model='random'>/dev/random</backend>
	I0802 17:43:28.112395   23378 main.go:141] libmachine: (ha-652395)     </rng>
	I0802 17:43:28.112402   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112410   23378 main.go:141] libmachine: (ha-652395)     
	I0802 17:43:28.112447   23378 main.go:141] libmachine: (ha-652395)   </devices>
	I0802 17:43:28.112471   23378 main.go:141] libmachine: (ha-652395) </domain>
	I0802 17:43:28.112484   23378 main.go:141] libmachine: (ha-652395) 
	I0802 17:43:28.116658   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ed:8e:1b in network default
	I0802 17:43:28.117252   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:28.117265   23378 main.go:141] libmachine: (ha-652395) Ensuring networks are active...
	I0802 17:43:28.117952   23378 main.go:141] libmachine: (ha-652395) Ensuring network default is active
	I0802 17:43:28.118277   23378 main.go:141] libmachine: (ha-652395) Ensuring network mk-ha-652395 is active
	I0802 17:43:28.118803   23378 main.go:141] libmachine: (ha-652395) Getting domain xml...
	I0802 17:43:28.120598   23378 main.go:141] libmachine: (ha-652395) Creating domain...
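Once the domain XML above is assembled, the driver defines the domain and boots it ("Creating domain..."). Below is a minimal, hedged sketch of that step using the libvirt Go bindings; the import path libvirt.org/go/libvirt and the file-based invocation are my assumptions, since minikube actually reaches libvirt through its docker-machine plugin rather than a standalone program like this.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

// defineAndStart persists a domain definition and boots it, mirroring the
// "define libvirt domain using xml" / "Creating domain..." lines above.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // boots the VM; DHCP assigns an IP afterwards
}

func main() {
	// Usage: go run . /path/to/domain.xml  (the <domain type='kvm'> document above, saved to a file)
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart(string(xml)); err != nil {
		log.Fatal(err)
	}
}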
	I0802 17:43:29.304293   23378 main.go:141] libmachine: (ha-652395) Waiting to get IP...
	I0802 17:43:29.305021   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.305389   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.305417   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.305371   23401 retry.go:31] will retry after 206.437797ms: waiting for machine to come up
	I0802 17:43:29.513790   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.514187   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.514209   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.514150   23401 retry.go:31] will retry after 317.949439ms: waiting for machine to come up
	I0802 17:43:29.833691   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:29.834084   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:29.834127   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:29.834036   23401 retry.go:31] will retry after 296.41332ms: waiting for machine to come up
	I0802 17:43:30.132447   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:30.132882   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:30.132909   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:30.132820   23401 retry.go:31] will retry after 578.802992ms: waiting for machine to come up
	I0802 17:43:30.713751   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:30.714194   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:30.714225   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:30.714143   23401 retry.go:31] will retry after 541.137947ms: waiting for machine to come up
	I0802 17:43:31.256734   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:31.257148   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:31.257166   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:31.257112   23401 retry.go:31] will retry after 868.454467ms: waiting for machine to come up
	I0802 17:43:32.127061   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:32.127448   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:32.127479   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:32.127407   23401 retry.go:31] will retry after 957.120594ms: waiting for machine to come up
	I0802 17:43:33.086307   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:33.086703   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:33.086732   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:33.086632   23401 retry.go:31] will retry after 950.640972ms: waiting for machine to come up
	I0802 17:43:34.038690   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:34.039181   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:34.039204   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:34.039131   23401 retry.go:31] will retry after 1.174050877s: waiting for machine to come up
	I0802 17:43:35.215420   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:35.215962   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:35.215990   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:35.215910   23401 retry.go:31] will retry after 2.321948842s: waiting for machine to come up
	I0802 17:43:37.540307   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:37.540802   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:37.540830   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:37.540758   23401 retry.go:31] will retry after 2.138795762s: waiting for machine to come up
	I0802 17:43:39.682424   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:39.682734   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:39.682756   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:39.682704   23401 retry.go:31] will retry after 3.350234739s: waiting for machine to come up
	I0802 17:43:43.034379   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:43.034761   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find current IP address of domain ha-652395 in network mk-ha-652395
	I0802 17:43:43.034786   23378 main.go:141] libmachine: (ha-652395) DBG | I0802 17:43:43.034714   23401 retry.go:31] will retry after 4.438592489s: waiting for machine to come up
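The retry.go lines above poll for a DHCP lease with a delay that grows and is jittered between attempts. The sketch below reproduces that wait-with-backoff pattern in plain Go; the lookup function, the 200ms starting delay, and the overall budget are stand-ins rather than the driver's real parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls lookup until it succeeds or the budget runs out, sleeping a
// jittered, growing delay between attempts, similar to the retry.go output above.
func waitFor(lookup func() (string, error), budget time.Duration) (string, error) {
	deadline := time.Now().Add(budget)
	backoff := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed, will retry after %v: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for an IP")
}

func main() {
	// Fake lookup that succeeds on the fourth attempt, standing in for the
	// DHCP-lease query against network mk-ha-652395.
	calls := 0
	ip, err := waitFor(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.210", nil
	}, 2*time.Minute)
	fmt.Println(ip, err)
}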
	I0802 17:43:47.476154   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.476553   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has current primary IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.476575   23378 main.go:141] libmachine: (ha-652395) Found IP for machine: 192.168.39.210
	I0802 17:43:47.476586   23378 main.go:141] libmachine: (ha-652395) Reserving static IP address...
	I0802 17:43:47.476910   23378 main.go:141] libmachine: (ha-652395) DBG | unable to find host DHCP lease matching {name: "ha-652395", mac: "52:54:00:ae:3a:9a", ip: "192.168.39.210"} in network mk-ha-652395
	I0802 17:43:47.546729   23378 main.go:141] libmachine: (ha-652395) DBG | Getting to WaitForSSH function...
	I0802 17:43:47.546784   23378 main.go:141] libmachine: (ha-652395) Reserved static IP address: 192.168.39.210
	I0802 17:43:47.546800   23378 main.go:141] libmachine: (ha-652395) Waiting for SSH to be available...
	I0802 17:43:47.549024   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.549350   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.549394   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.549467   23378 main.go:141] libmachine: (ha-652395) DBG | Using SSH client type: external
	I0802 17:43:47.549509   23378 main.go:141] libmachine: (ha-652395) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa (-rw-------)
	I0802 17:43:47.549536   23378 main.go:141] libmachine: (ha-652395) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:43:47.549558   23378 main.go:141] libmachine: (ha-652395) DBG | About to run SSH command:
	I0802 17:43:47.549572   23378 main.go:141] libmachine: (ha-652395) DBG | exit 0
	I0802 17:43:47.674982   23378 main.go:141] libmachine: (ha-652395) DBG | SSH cmd err, output: <nil>: 
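WaitForSSH above probes the freshly booted VM by running `exit 0` over an external ssh client with host-key checking disabled. A self-contained approximation of that probe is shown below; the user, IP, key path, attempt count, and sleep interval are illustrative placeholders, not the driver's actual settings.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH shells out to ssh and runs `exit 0` until the probe succeeds,
// using the same kind of option list as the log line above.
func waitForSSH(user, ip, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, ip),
		"exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // SSH answered; provisioning can continue
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s@%s never came up", user, ip)
}

func main() {
	// Illustrative values taken from the lease above; the key path is a placeholder.
	if err := waitForSSH("docker", "192.168.39.210", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}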
	I0802 17:43:47.675260   23378 main.go:141] libmachine: (ha-652395) KVM machine creation complete!
	I0802 17:43:47.675619   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:47.676203   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:47.676379   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:47.676547   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:43:47.676564   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:43:47.677795   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:43:47.677810   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:43:47.677818   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:43:47.677827   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.680082   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.680411   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.680437   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.680572   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.680735   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.680838   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.680931   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.681070   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.681318   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.681334   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:43:47.786185   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:43:47.786206   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:43:47.786214   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.788979   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.789319   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.789345   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.789463   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.789645   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.789796   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.789900   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.790055   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.790274   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.790290   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:43:47.895389   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:43:47.895462   23378 main.go:141] libmachine: found compatible host: buildroot
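Provisioner detection runs `cat /etc/os-release` and matches on its fields (here ID=buildroot). A small sketch of that parsing, fed the exact output captured above, is shown below; the helper name is made up and the real detection logic in the provisioner is more involved.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns `cat /etc/os-release` output into key/value pairs.
func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("compatible host:", info["ID"]) // buildroot
}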
	I0802 17:43:47.895472   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:43:47.895483   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:47.895777   23378 buildroot.go:166] provisioning hostname "ha-652395"
	I0802 17:43:47.895801   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:47.895976   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:47.898234   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.898534   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:47.898558   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:47.898698   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:47.898911   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.899028   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:47.899189   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:47.899346   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:47.899518   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:47.899530   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395 && echo "ha-652395" | sudo tee /etc/hostname
	I0802 17:43:48.016012   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:43:48.016041   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.018712   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.019181   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.019211   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.019353   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.019529   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.019681   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.019837   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.020018   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.020223   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.020241   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:43:48.135041   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:43:48.135070   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:43:48.135126   23378 buildroot.go:174] setting up certificates
	I0802 17:43:48.135139   23378 provision.go:84] configureAuth start
	I0802 17:43:48.135150   23378 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:43:48.135417   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.138137   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.138480   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.138512   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.138649   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.140762   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.141045   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.141069   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.141206   23378 provision.go:143] copyHostCerts
	I0802 17:43:48.141236   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:43:48.141275   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:43:48.141284   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:43:48.141346   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:43:48.141429   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:43:48.141447   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:43:48.141462   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:43:48.141489   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:43:48.141531   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:43:48.141548   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:43:48.141554   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:43:48.141588   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:43:48.141634   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395 san=[127.0.0.1 192.168.39.210 ha-652395 localhost minikube]
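The server certificate above is signed by the local CA with SANs covering 127.0.0.1, the VM's IP, the machine name, localhost, and minikube. The sketch below shows the same x509 recipe using only Go's standard library; to stay self-contained it generates a throwaway CA in memory, whereas the real step loads ca.pem/ca-key.pem from the .minikube directory, so treat it as an illustration rather than minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	serverTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-652395"}},
		DNSNames:     []string{"ha-652395", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.210")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}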
	I0802 17:43:48.239558   23378 provision.go:177] copyRemoteCerts
	I0802 17:43:48.239612   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:43:48.239635   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.242457   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.242774   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.242799   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.242926   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.243133   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.243299   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.243417   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.324685   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:43:48.324749   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:43:48.346222   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:43:48.346302   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:43:48.367321   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:43:48.367402   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0802 17:43:48.388695   23378 provision.go:87] duration metric: took 253.541137ms to configureAuth
	I0802 17:43:48.388723   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:43:48.388930   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:43:48.389017   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.391564   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.391885   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.391913   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.392056   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.392251   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.392433   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.392570   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.392709   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.392865   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.392883   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:43:48.645388   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
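The `%!s(MISSING)` in the command above is not part of what actually ran: the shell snippet contains a literal `printf %s`, and when the driver echoes that snippet through a Printf-style logger without arguments, Go's fmt package reports the missing operand. The toy program below reproduces the artifact; the reconstructed command string is approximate.

package main

import "fmt"

func main() {
	// The snippet itself contains a printf verb...
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	// ...so logging it as a format string (no args) yields %!s(MISSING), as in the log.
	fmt.Printf("About to run SSH command:\n" + cmd + "\n")
}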
	
	I0802 17:43:48.645420   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:43:48.645430   23378 main.go:141] libmachine: (ha-652395) Calling .GetURL
	I0802 17:43:48.646630   23378 main.go:141] libmachine: (ha-652395) DBG | Using libvirt version 6000000
	I0802 17:43:48.648475   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.648797   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.648817   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.649029   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:43:48.649041   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:43:48.649047   23378 client.go:171] duration metric: took 21.029683702s to LocalClient.Create
	I0802 17:43:48.649079   23378 start.go:167] duration metric: took 21.029733945s to libmachine.API.Create "ha-652395"
	I0802 17:43:48.649088   23378 start.go:293] postStartSetup for "ha-652395" (driver="kvm2")
	I0802 17:43:48.649097   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:43:48.649110   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.649321   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:43:48.649360   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.651633   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.651945   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.651969   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.652118   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.652345   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.652548   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.652713   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.733227   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:43:48.736973   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:43:48.736994   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:43:48.737050   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:43:48.737115   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:43:48.737128   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:43:48.737210   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:43:48.746047   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:43:48.767307   23378 start.go:296] duration metric: took 118.189518ms for postStartSetup
	I0802 17:43:48.767349   23378 main.go:141] libmachine: (ha-652395) Calling .GetConfigRaw
	I0802 17:43:48.767931   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.770145   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.770431   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.770470   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.770687   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:43:48.770851   23378 start.go:128] duration metric: took 21.168914849s to createHost
	I0802 17:43:48.770870   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.772913   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.773160   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.773190   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.773352   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.773510   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.773628   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.773838   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.773954   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:43:48.774126   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:43:48.774135   23378 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 17:43:48.879555   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620628.854031292
	
	I0802 17:43:48.879578   23378 fix.go:216] guest clock: 1722620628.854031292
	I0802 17:43:48.879588   23378 fix.go:229] Guest: 2024-08-02 17:43:48.854031292 +0000 UTC Remote: 2024-08-02 17:43:48.770861378 +0000 UTC m=+21.272573656 (delta=83.169914ms)
	I0802 17:43:48.879631   23378 fix.go:200] guest clock delta is within tolerance: 83.169914ms
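The delta reported above is simply the guest clock sample minus the host-side timestamp for the same instant: 1722620628.854031292 s - 1722620628.770861378 s = 0.083169914 s, i.e. the 83.169914ms that is then judged to be within tolerance.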
	I0802 17:43:48.879638   23378 start.go:83] releasing machines lock for "ha-652395", held for 21.277774233s
	I0802 17:43:48.879658   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.879906   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:48.882158   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.882466   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.882484   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.882693   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883190   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883352   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:43:48.883448   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:43:48.883480   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.883535   23378 ssh_runner.go:195] Run: cat /version.json
	I0802 17:43:48.883558   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:43:48.885979   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886112   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886327   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.886357   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886453   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:48.886468   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.886489   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:48.886679   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.886695   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:43:48.886858   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.886863   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:43:48.887005   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:43:48.886996   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.887146   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:43:48.963670   23378 ssh_runner.go:195] Run: systemctl --version
	I0802 17:43:49.000362   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:43:49.153351   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:43:49.159630   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:43:49.159690   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:43:49.174393   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:43:49.174419   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:43:49.174485   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:43:49.189549   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:43:49.202460   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:43:49.202510   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:43:49.216121   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:43:49.229759   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:43:49.342217   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:43:49.477112   23378 docker.go:233] disabling docker service ...
	I0802 17:43:49.477177   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:43:49.490688   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:43:49.502398   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:43:49.638741   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:43:49.747840   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:43:49.760987   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:43:49.777504   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:43:49.777559   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.786762   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:43:49.786828   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.796125   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.805267   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.814132   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:43:49.823601   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.832591   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:43:49.847883   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
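Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch reconstructed from the commands themselves, not captured from the VM):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]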
	I0802 17:43:49.857095   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:43:49.865698   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:43:49.865769   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:43:49.877492   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:43:49.887087   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:43:49.990294   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:43:50.117171   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:43:50.117248   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:43:50.121957   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:43:50.121992   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:43:50.125194   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:43:50.161936   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:43:50.162018   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:43:50.188078   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:43:50.222165   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:43:50.223314   23378 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:43:50.225669   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:50.225973   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:43:50.226014   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:43:50.226182   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:43:50.230075   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:43:50.242064   23378 kubeadm.go:883] updating cluster {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:43:50.242158   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:43:50.242222   23378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:43:50.271773   23378 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 17:43:50.271828   23378 ssh_runner.go:195] Run: which lz4
	I0802 17:43:50.275129   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0802 17:43:50.275210   23378 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 17:43:50.278906   23378 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 17:43:50.278938   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 17:43:51.475425   23378 crio.go:462] duration metric: took 1.200229686s to copy over tarball
	I0802 17:43:51.475504   23378 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 17:43:53.541418   23378 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.065866585s)
	I0802 17:43:53.541456   23378 crio.go:469] duration metric: took 2.065994563s to extract the tarball
	I0802 17:43:53.541466   23378 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 17:43:53.578000   23378 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:43:53.619614   23378 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:43:53.619638   23378 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:43:53.619647   23378 kubeadm.go:934] updating node { 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0802 17:43:53.619781   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:43:53.619863   23378 ssh_runner.go:195] Run: crio config
	I0802 17:43:53.667999   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:43:53.668024   23378 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 17:43:53.668034   23378 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:43:53.668057   23378 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-652395 NodeName:ha-652395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:43:53.668221   23378 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-652395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 17:43:53.668249   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:43:53.668309   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:43:53.683501   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:43:53.683641   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0802 17:43:53.683724   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:43:53.692904   23378 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:43:53.692974   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0802 17:43:53.701414   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0802 17:43:53.716312   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:43:53.730577   23378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 17:43:53.745247   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0802 17:43:53.760126   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:43:53.763517   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
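This rewrite, together with the matching one for host.minikube.internal at 17:43:50 above, should leave /etc/hosts on the VM with these two extra entries alongside its defaults (reconstructed from the commands, not captured from the VM):

    192.168.39.1	host.minikube.internal
    192.168.39.254	control-plane.minikube.internal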
	I0802 17:43:53.774170   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:43:53.889085   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:43:53.905244   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.210
	I0802 17:43:53.905264   23378 certs.go:194] generating shared ca certs ...
	I0802 17:43:53.905288   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:53.905446   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:43:53.905482   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:43:53.905491   23378 certs.go:256] generating profile certs ...
	I0802 17:43:53.905539   23378 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:43:53.905552   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt with IP's: []
	I0802 17:43:54.053414   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt ...
	I0802 17:43:54.053445   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt: {Name:mk314022aeb5eeb0a845d5e8cd46286bc9907522 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.053633   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key ...
	I0802 17:43:54.053646   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key: {Name:mk5b437e61241eb8c16ba4e9fbfd32eed2d1a7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.053733   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c
	I0802 17:43:54.053750   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.254]
	I0802 17:43:54.304477   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c ...
	I0802 17:43:54.304511   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c: {Name:mkcd4a89a2871e6bdf2fd9eb443ed97cb6069758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.304686   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c ...
	I0802 17:43:54.304700   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c: {Name:mkbaf0ce6457d1d137e82c654b0f103e2bb7dffa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.304777   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.1fd73d6c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:43:54.304874   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.1fd73d6c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
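If needed, the SANs baked into the copied apiserver.crt can be double-checked by hand; this is an optional verification step the test run itself does not perform:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.210 and 192.168.39.254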
	I0802 17:43:54.304938   23378 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:43:54.304955   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt with IP's: []
	I0802 17:43:54.367003   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt ...
	I0802 17:43:54.367035   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt: {Name:mk9b147340d68f0948aa055cf8f58f42b1889b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.367225   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key ...
	I0802 17:43:54.367239   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key: {Name:mke9487a3c9b3a3f630f52ed701c26cf34a31157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:43:54.367320   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:43:54.367341   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:43:54.367355   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:43:54.367374   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:43:54.367389   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:43:54.367405   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:43:54.367420   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:43:54.367435   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:43:54.367492   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:43:54.367529   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:43:54.367541   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:43:54.367567   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:43:54.367592   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:43:54.367616   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:43:54.367668   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:43:54.367698   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.367715   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.367730   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.368234   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:43:54.392172   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:43:54.413471   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:43:54.434644   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:43:54.456366   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0802 17:43:54.478038   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:43:54.499093   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:43:54.520439   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:43:54.542074   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:43:54.563439   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:43:54.584546   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:43:54.606302   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:43:54.621748   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:43:54.627337   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:43:54.637187   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.641155   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.641213   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:43:54.646744   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:43:54.656795   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:43:54.669041   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.673199   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.673270   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:43:54.686479   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:43:54.710437   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:43:54.721397   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.728696   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.728762   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:43:54.735208   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
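The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes produced by the preceding "openssl x509 -hash -noout" calls; the minikubeCA case, for example, can be reproduced by hand (illustrative only):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0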
	I0802 17:43:54.745472   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:43:54.749220   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:43:54.749277   23378 kubeadm.go:392] StartCluster: {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:54.749343   23378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:43:54.749398   23378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:43:54.788152   23378 cri.go:89] found id: ""
	I0802 17:43:54.788236   23378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 17:43:54.797773   23378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 17:43:54.806467   23378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 17:43:54.815266   23378 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 17:43:54.815284   23378 kubeadm.go:157] found existing configuration files:
	
	I0802 17:43:54.815332   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 17:43:54.823631   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 17:43:54.823698   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 17:43:54.832510   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 17:43:54.841157   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 17:43:54.841221   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 17:43:54.850423   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 17:43:54.858886   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 17:43:54.858943   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 17:43:54.867858   23378 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 17:43:54.876271   23378 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 17:43:54.876333   23378 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 17:43:54.885019   23378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 17:43:54.980098   23378 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 17:43:54.980156   23378 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 17:43:55.093176   23378 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 17:43:55.093342   23378 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 17:43:55.093476   23378 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 17:43:55.275755   23378 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 17:43:55.278933   23378 out.go:204]   - Generating certificates and keys ...
	I0802 17:43:55.279155   23378 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 17:43:55.279710   23378 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 17:43:55.405849   23378 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 17:43:55.560710   23378 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 17:43:55.626835   23378 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 17:43:55.710955   23378 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 17:43:55.808965   23378 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 17:43:55.809202   23378 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-652395 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0802 17:43:56.078095   23378 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 17:43:56.078366   23378 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-652395 localhost] and IPs [192.168.39.210 127.0.0.1 ::1]
	I0802 17:43:56.234541   23378 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 17:43:56.413241   23378 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 17:43:56.651554   23378 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 17:43:56.651770   23378 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 17:43:56.760727   23378 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 17:43:56.809425   23378 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 17:43:57.166254   23378 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 17:43:57.345558   23378 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 17:43:57.523845   23378 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 17:43:57.524396   23378 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 17:43:57.527324   23378 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 17:43:57.529067   23378 out.go:204]   - Booting up control plane ...
	I0802 17:43:57.529164   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 17:43:57.529258   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 17:43:57.529451   23378 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 17:43:57.546637   23378 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 17:43:57.547543   23378 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 17:43:57.547585   23378 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 17:43:57.681126   23378 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 17:43:57.681231   23378 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 17:43:58.182680   23378 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.628438ms
	I0802 17:43:58.182774   23378 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 17:44:04.086599   23378 kubeadm.go:310] [api-check] The API server is healthy after 5.9078985s
	I0802 17:44:04.099166   23378 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 17:44:04.113927   23378 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 17:44:04.141168   23378 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 17:44:04.141377   23378 kubeadm.go:310] [mark-control-plane] Marking the node ha-652395 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 17:44:04.158760   23378 kubeadm.go:310] [bootstrap-token] Using token: gh7ckt.nhzg9mtgbeyyrv9u
	I0802 17:44:04.160217   23378 out.go:204]   - Configuring RBAC rules ...
	I0802 17:44:04.160374   23378 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 17:44:04.164771   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 17:44:04.180573   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 17:44:04.184329   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 17:44:04.188291   23378 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 17:44:04.193124   23378 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 17:44:04.493327   23378 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 17:44:04.936050   23378 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 17:44:05.494536   23378 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 17:44:05.494565   23378 kubeadm.go:310] 
	I0802 17:44:05.494644   23378 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 17:44:05.494651   23378 kubeadm.go:310] 
	I0802 17:44:05.494752   23378 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 17:44:05.494762   23378 kubeadm.go:310] 
	I0802 17:44:05.494817   23378 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 17:44:05.494899   23378 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 17:44:05.494967   23378 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 17:44:05.494976   23378 kubeadm.go:310] 
	I0802 17:44:05.495049   23378 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 17:44:05.495059   23378 kubeadm.go:310] 
	I0802 17:44:05.495137   23378 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 17:44:05.495147   23378 kubeadm.go:310] 
	I0802 17:44:05.495217   23378 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 17:44:05.495341   23378 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 17:44:05.495413   23378 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 17:44:05.495421   23378 kubeadm.go:310] 
	I0802 17:44:05.495493   23378 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 17:44:05.495562   23378 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 17:44:05.495568   23378 kubeadm.go:310] 
	I0802 17:44:05.495635   23378 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token gh7ckt.nhzg9mtgbeyyrv9u \
	I0802 17:44:05.495724   23378 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 17:44:05.495747   23378 kubeadm.go:310] 	--control-plane 
	I0802 17:44:05.495753   23378 kubeadm.go:310] 
	I0802 17:44:05.495822   23378 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 17:44:05.495829   23378 kubeadm.go:310] 
	I0802 17:44:05.495894   23378 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gh7ckt.nhzg9mtgbeyyrv9u \
	I0802 17:44:05.496028   23378 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 17:44:05.496466   23378 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 17:44:05.496493   23378 cni.go:84] Creating CNI manager for ""
	I0802 17:44:05.496503   23378 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0802 17:44:05.498369   23378 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0802 17:44:05.499806   23378 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0802 17:44:05.505146   23378 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0802 17:44:05.505164   23378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0802 17:44:05.523570   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0802 17:44:05.957867   23378 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 17:44:05.958009   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:05.958020   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395 minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=true
	I0802 17:44:06.021205   23378 ops.go:34] apiserver oom_adj: -16
	I0802 17:44:06.127885   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:06.628760   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:07.128391   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:07.627986   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:08.128127   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:08.627976   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:09.128814   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:09.628646   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:10.128361   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:10.628952   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:11.128198   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:11.628805   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:12.128917   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:12.628721   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:13.128762   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:13.628830   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:14.128121   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:14.628558   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:15.128510   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:15.628063   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:16.128965   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:16.628500   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:17.128859   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 17:44:17.250527   23378 kubeadm.go:1113] duration metric: took 11.292568167s to wait for elevateKubeSystemPrivileges
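The run of `kubectl get sa default` calls above is a poll-until-ready loop: the `default` service account in kube-system is re-checked roughly every 500ms until it exists, which is what "wait for elevateKubeSystemPrivileges" measures. A minimal Go sketch of that polling pattern (illustrative only, not minikube's code; the kubeconfig path and timeout are assumptions):

```go
// pollDefaultSA re-runs "kubectl get sa default" until it succeeds or the
// timeout expires, mirroring the ~500ms polling cadence visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func pollDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	// Illustrative path; the log runs kubectl against /var/lib/minikube/kubeconfig on the VM.
	if err := pollDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```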
	I0802 17:44:17.250570   23378 kubeadm.go:394] duration metric: took 22.501297226s to StartCluster
	I0802 17:44:17.250594   23378 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:17.250681   23378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:44:17.251618   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:17.251841   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 17:44:17.251852   23378 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:17.251877   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:44:17.251889   23378 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 17:44:17.251950   23378 addons.go:69] Setting storage-provisioner=true in profile "ha-652395"
	I0802 17:44:17.251958   23378 addons.go:69] Setting default-storageclass=true in profile "ha-652395"
	I0802 17:44:17.251978   23378 addons.go:234] Setting addon storage-provisioner=true in "ha-652395"
	I0802 17:44:17.251996   23378 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-652395"
	I0802 17:44:17.252008   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:17.252115   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:17.252448   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.252481   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.252448   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.252601   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.267843   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0802 17:44:17.267846   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0802 17:44:17.268349   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.268399   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.268908   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.268927   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.268911   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.268991   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.269276   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.269321   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.269538   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.269836   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.269871   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.271906   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:44:17.272284   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 17:44:17.272869   23378 cert_rotation.go:137] Starting client certificate rotation controller
	I0802 17:44:17.273070   23378 addons.go:234] Setting addon default-storageclass=true in "ha-652395"
	I0802 17:44:17.273113   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:17.273513   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.273543   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.285737   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42325
	I0802 17:44:17.286210   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.286750   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.286776   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.287074   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.287266   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.287801   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0802 17:44:17.288165   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.288636   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.288703   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.288960   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:17.289205   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.289845   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:17.289869   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:17.291346   23378 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 17:44:17.292671   23378 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:44:17.292703   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 17:44:17.292726   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:17.296021   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.296529   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:17.296603   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.296888   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:17.297058   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:17.297225   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:17.297386   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:17.305530   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0802 17:44:17.305939   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:17.306438   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:17.306456   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:17.306783   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:17.307000   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:17.308614   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:17.308824   23378 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 17:44:17.308842   23378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 17:44:17.308861   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:17.311528   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.312037   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:17.312073   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:17.312264   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:17.312431   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:17.312605   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:17.312738   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:17.367506   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 17:44:17.456500   23378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 17:44:17.469431   23378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 17:44:17.852598   23378 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
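The sed pipeline above rewrites the CoreDNS Corefile so that `host.minikube.internal` resolves to the host gateway IP (192.168.39.1). A rough client-go sketch of the same edit (illustrative only; minikube actually does it over SSH with kubectl and sed, and the kubeconfig path here is an assumption):

```go
// Sketch: insert a "hosts" block resolving host.minikube.internal into the
// CoreDNS Corefile, the client-go equivalent of the kubectl|sed pipeline above.
package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

func injectHostRecord(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.TODO()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		// Place the hosts block just before the forward plugin, as the sed script does.
		cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
		_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	}
	return err
}

func main() {
	if err := injectHostRecord(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}
```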
	I0802 17:44:18.117178   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117200   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117240   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117260   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117480   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.117497   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.117507   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117517   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.117525   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.117483   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.117549   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.117561   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.117569   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.119196   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.119213   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.119219   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.119245   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.119260   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.119283   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.119334   23378 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0802 17:44:18.119345   23378 round_trippers.go:469] Request Headers:
	I0802 17:44:18.119354   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:44:18.119362   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:44:18.129277   23378 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0802 17:44:18.129874   23378 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0802 17:44:18.129891   23378 round_trippers.go:469] Request Headers:
	I0802 17:44:18.129902   23378 round_trippers.go:473]     Content-Type: application/json
	I0802 17:44:18.129907   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:44:18.129911   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:44:18.132586   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
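The GET followed by PUT on `/apis/storage.k8s.io/v1/storageclasses/standard` is the default-storageclass addon marking `standard` as the cluster default. A hedged client-go sketch of those two requests (not the code that produced this log; the kubeconfig path is an assumption):

```go
// Sketch: fetch the "standard" StorageClass and annotate it as the default
// class, roughly what the GET + PUT round trips above accomplish.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func markDefaultStorageClass(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ctx := context.TODO()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Well-known annotation that makes a StorageClass the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}

func main() {
	if err := markDefaultStorageClass(clientcmd.RecommendedHomeFile); err != nil {
		panic(err)
	}
}
```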
	I0802 17:44:18.132811   23378 main.go:141] libmachine: Making call to close driver server
	I0802 17:44:18.132838   23378 main.go:141] libmachine: (ha-652395) Calling .Close
	I0802 17:44:18.133081   23378 main.go:141] libmachine: Successfully made call to close driver server
	I0802 17:44:18.133095   23378 main.go:141] libmachine: (ha-652395) DBG | Closing plugin on server side
	I0802 17:44:18.133105   23378 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 17:44:18.134947   23378 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 17:44:18.136166   23378 addons.go:510] duration metric: took 884.272175ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0802 17:44:18.136208   23378 start.go:246] waiting for cluster config update ...
	I0802 17:44:18.136222   23378 start.go:255] writing updated cluster config ...
	I0802 17:44:18.137724   23378 out.go:177] 
	I0802 17:44:18.139128   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:18.139205   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:18.140885   23378 out.go:177] * Starting "ha-652395-m02" control-plane node in "ha-652395" cluster
	I0802 17:44:18.142148   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:44:18.142186   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:44:18.142277   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:44:18.142319   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:44:18.142418   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:18.142659   23378 start.go:360] acquireMachinesLock for ha-652395-m02: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:44:18.142710   23378 start.go:364] duration metric: took 29.12µs to acquireMachinesLock for "ha-652395-m02"
	I0802 17:44:18.142726   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:18.142841   23378 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0802 17:44:18.145485   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:44:18.145595   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:18.145631   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:18.159958   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I0802 17:44:18.160419   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:18.160899   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:18.160921   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:18.161237   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:18.161412   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:18.161552   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:18.161698   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:44:18.161730   23378 client.go:168] LocalClient.Create starting
	I0802 17:44:18.161758   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:44:18.161786   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:44:18.161800   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:44:18.161846   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:44:18.161864   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:44:18.161885   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:44:18.161900   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:44:18.161908   23378 main.go:141] libmachine: (ha-652395-m02) Calling .PreCreateCheck
	I0802 17:44:18.162043   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:18.162490   23378 main.go:141] libmachine: Creating machine...
	I0802 17:44:18.162503   23378 main.go:141] libmachine: (ha-652395-m02) Calling .Create
	I0802 17:44:18.162663   23378 main.go:141] libmachine: (ha-652395-m02) Creating KVM machine...
	I0802 17:44:18.163863   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found existing default KVM network
	I0802 17:44:18.164019   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found existing private KVM network mk-ha-652395
	I0802 17:44:18.164159   23378 main.go:141] libmachine: (ha-652395-m02) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 ...
	I0802 17:44:18.164181   23378 main.go:141] libmachine: (ha-652395-m02) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:44:18.164260   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.164157   23763 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:44:18.164384   23378 main.go:141] libmachine: (ha-652395-m02) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:44:18.390136   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.390008   23763 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa...
	I0802 17:44:18.528332   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.528175   23763 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/ha-652395-m02.rawdisk...
	I0802 17:44:18.528368   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Writing magic tar header
	I0802 17:44:18.528425   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Writing SSH key tar header
	I0802 17:44:18.528456   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:18.528319   23763 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 ...
	I0802 17:44:18.528473   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02
	I0802 17:44:18.528482   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:44:18.528500   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02 (perms=drwx------)
	I0802 17:44:18.528510   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:44:18.528517   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:44:18.528526   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:44:18.528535   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:44:18.528542   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:44:18.528552   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:44:18.528561   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:44:18.528569   23378 main.go:141] libmachine: (ha-652395-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:44:18.528575   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:44:18.528585   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Checking permissions on dir: /home
	I0802 17:44:18.528592   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Skipping /home - not owner
	I0802 17:44:18.528620   23378 main.go:141] libmachine: (ha-652395-m02) Creating domain...
	I0802 17:44:18.529489   23378 main.go:141] libmachine: (ha-652395-m02) define libvirt domain using xml: 
	I0802 17:44:18.529509   23378 main.go:141] libmachine: (ha-652395-m02) <domain type='kvm'>
	I0802 17:44:18.529520   23378 main.go:141] libmachine: (ha-652395-m02)   <name>ha-652395-m02</name>
	I0802 17:44:18.529526   23378 main.go:141] libmachine: (ha-652395-m02)   <memory unit='MiB'>2200</memory>
	I0802 17:44:18.529534   23378 main.go:141] libmachine: (ha-652395-m02)   <vcpu>2</vcpu>
	I0802 17:44:18.529541   23378 main.go:141] libmachine: (ha-652395-m02)   <features>
	I0802 17:44:18.529549   23378 main.go:141] libmachine: (ha-652395-m02)     <acpi/>
	I0802 17:44:18.529564   23378 main.go:141] libmachine: (ha-652395-m02)     <apic/>
	I0802 17:44:18.529572   23378 main.go:141] libmachine: (ha-652395-m02)     <pae/>
	I0802 17:44:18.529580   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.529592   23378 main.go:141] libmachine: (ha-652395-m02)   </features>
	I0802 17:44:18.529606   23378 main.go:141] libmachine: (ha-652395-m02)   <cpu mode='host-passthrough'>
	I0802 17:44:18.529617   23378 main.go:141] libmachine: (ha-652395-m02)   
	I0802 17:44:18.529625   23378 main.go:141] libmachine: (ha-652395-m02)   </cpu>
	I0802 17:44:18.529632   23378 main.go:141] libmachine: (ha-652395-m02)   <os>
	I0802 17:44:18.529640   23378 main.go:141] libmachine: (ha-652395-m02)     <type>hvm</type>
	I0802 17:44:18.529648   23378 main.go:141] libmachine: (ha-652395-m02)     <boot dev='cdrom'/>
	I0802 17:44:18.529658   23378 main.go:141] libmachine: (ha-652395-m02)     <boot dev='hd'/>
	I0802 17:44:18.529668   23378 main.go:141] libmachine: (ha-652395-m02)     <bootmenu enable='no'/>
	I0802 17:44:18.529681   23378 main.go:141] libmachine: (ha-652395-m02)   </os>
	I0802 17:44:18.529691   23378 main.go:141] libmachine: (ha-652395-m02)   <devices>
	I0802 17:44:18.529703   23378 main.go:141] libmachine: (ha-652395-m02)     <disk type='file' device='cdrom'>
	I0802 17:44:18.529719   23378 main.go:141] libmachine: (ha-652395-m02)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/boot2docker.iso'/>
	I0802 17:44:18.529730   23378 main.go:141] libmachine: (ha-652395-m02)       <target dev='hdc' bus='scsi'/>
	I0802 17:44:18.529737   23378 main.go:141] libmachine: (ha-652395-m02)       <readonly/>
	I0802 17:44:18.529747   23378 main.go:141] libmachine: (ha-652395-m02)     </disk>
	I0802 17:44:18.529768   23378 main.go:141] libmachine: (ha-652395-m02)     <disk type='file' device='disk'>
	I0802 17:44:18.529793   23378 main.go:141] libmachine: (ha-652395-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:44:18.529809   23378 main.go:141] libmachine: (ha-652395-m02)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/ha-652395-m02.rawdisk'/>
	I0802 17:44:18.529821   23378 main.go:141] libmachine: (ha-652395-m02)       <target dev='hda' bus='virtio'/>
	I0802 17:44:18.529830   23378 main.go:141] libmachine: (ha-652395-m02)     </disk>
	I0802 17:44:18.529835   23378 main.go:141] libmachine: (ha-652395-m02)     <interface type='network'>
	I0802 17:44:18.529841   23378 main.go:141] libmachine: (ha-652395-m02)       <source network='mk-ha-652395'/>
	I0802 17:44:18.529848   23378 main.go:141] libmachine: (ha-652395-m02)       <model type='virtio'/>
	I0802 17:44:18.529854   23378 main.go:141] libmachine: (ha-652395-m02)     </interface>
	I0802 17:44:18.529868   23378 main.go:141] libmachine: (ha-652395-m02)     <interface type='network'>
	I0802 17:44:18.529881   23378 main.go:141] libmachine: (ha-652395-m02)       <source network='default'/>
	I0802 17:44:18.529892   23378 main.go:141] libmachine: (ha-652395-m02)       <model type='virtio'/>
	I0802 17:44:18.529903   23378 main.go:141] libmachine: (ha-652395-m02)     </interface>
	I0802 17:44:18.529909   23378 main.go:141] libmachine: (ha-652395-m02)     <serial type='pty'>
	I0802 17:44:18.529915   23378 main.go:141] libmachine: (ha-652395-m02)       <target port='0'/>
	I0802 17:44:18.529921   23378 main.go:141] libmachine: (ha-652395-m02)     </serial>
	I0802 17:44:18.529929   23378 main.go:141] libmachine: (ha-652395-m02)     <console type='pty'>
	I0802 17:44:18.529940   23378 main.go:141] libmachine: (ha-652395-m02)       <target type='serial' port='0'/>
	I0802 17:44:18.529954   23378 main.go:141] libmachine: (ha-652395-m02)     </console>
	I0802 17:44:18.529968   23378 main.go:141] libmachine: (ha-652395-m02)     <rng model='virtio'>
	I0802 17:44:18.529979   23378 main.go:141] libmachine: (ha-652395-m02)       <backend model='random'>/dev/random</backend>
	I0802 17:44:18.529989   23378 main.go:141] libmachine: (ha-652395-m02)     </rng>
	I0802 17:44:18.529998   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.530003   23378 main.go:141] libmachine: (ha-652395-m02)     
	I0802 17:44:18.530008   23378 main.go:141] libmachine: (ha-652395-m02)   </devices>
	I0802 17:44:18.530014   23378 main.go:141] libmachine: (ha-652395-m02) </domain>
	I0802 17:44:18.530021   23378 main.go:141] libmachine: (ha-652395-m02) 
	I0802 17:44:18.536563   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:02:98:f3 in network default
	I0802 17:44:18.537135   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring networks are active...
	I0802 17:44:18.537153   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:18.537838   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring network default is active
	I0802 17:44:18.538245   23378 main.go:141] libmachine: (ha-652395-m02) Ensuring network mk-ha-652395 is active
	I0802 17:44:18.538616   23378 main.go:141] libmachine: (ha-652395-m02) Getting domain xml...
	I0802 17:44:18.539291   23378 main.go:141] libmachine: (ha-652395-m02) Creating domain...
	I0802 17:44:19.736873   23378 main.go:141] libmachine: (ha-652395-m02) Waiting to get IP...
	I0802 17:44:19.737634   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:19.738084   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:19.738126   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:19.738068   23763 retry.go:31] will retry after 217.948043ms: waiting for machine to come up
	I0802 17:44:19.958844   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:19.959262   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:19.959291   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:19.959230   23763 retry.go:31] will retry after 326.796973ms: waiting for machine to come up
	I0802 17:44:20.287452   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:20.287947   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:20.287982   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:20.287894   23763 retry.go:31] will retry after 376.716008ms: waiting for machine to come up
	I0802 17:44:20.666405   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:20.666943   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:20.666973   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:20.666909   23763 retry.go:31] will retry after 564.174398ms: waiting for machine to come up
	I0802 17:44:21.232225   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:21.232677   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:21.232706   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:21.232640   23763 retry.go:31] will retry after 733.655034ms: waiting for machine to come up
	I0802 17:44:21.967411   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:21.967809   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:21.967830   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:21.967776   23763 retry.go:31] will retry after 665.784935ms: waiting for machine to come up
	I0802 17:44:22.634995   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:22.635613   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:22.635642   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:22.635569   23763 retry.go:31] will retry after 790.339868ms: waiting for machine to come up
	I0802 17:44:23.427950   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:23.428503   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:23.428530   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:23.428467   23763 retry.go:31] will retry after 968.769963ms: waiting for machine to come up
	I0802 17:44:24.398711   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:24.399081   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:24.399115   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:24.399042   23763 retry.go:31] will retry after 1.755457058s: waiting for machine to come up
	I0802 17:44:26.156831   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:26.157231   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:26.157260   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:26.157187   23763 retry.go:31] will retry after 2.231533101s: waiting for machine to come up
	I0802 17:44:28.390743   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:28.391237   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:28.391259   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:28.391157   23763 retry.go:31] will retry after 2.175447005s: waiting for machine to come up
	I0802 17:44:30.569368   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:30.569868   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:30.569898   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:30.569816   23763 retry.go:31] will retry after 3.609031806s: waiting for machine to come up
	I0802 17:44:34.179928   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:34.180339   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find current IP address of domain ha-652395-m02 in network mk-ha-652395
	I0802 17:44:34.180364   23378 main.go:141] libmachine: (ha-652395-m02) DBG | I0802 17:44:34.180295   23763 retry.go:31] will retry after 3.725193463s: waiting for machine to come up
	I0802 17:44:37.908271   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.908731   23378 main.go:141] libmachine: (ha-652395-m02) Found IP for machine: 192.168.39.220
	I0802 17:44:37.908756   23378 main.go:141] libmachine: (ha-652395-m02) Reserving static IP address...
	I0802 17:44:37.908765   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.909129   23378 main.go:141] libmachine: (ha-652395-m02) DBG | unable to find host DHCP lease matching {name: "ha-652395-m02", mac: "52:54:00:da:d8:1e", ip: "192.168.39.220"} in network mk-ha-652395
	I0802 17:44:37.981456   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Getting to WaitForSSH function...
	I0802 17:44:37.981491   23378 main.go:141] libmachine: (ha-652395-m02) Reserved static IP address: 192.168.39.220
	I0802 17:44:37.981507   23378 main.go:141] libmachine: (ha-652395-m02) Waiting for SSH to be available...
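The "will retry after …" lines above come from libmachine polling for the new VM's DHCP lease with steadily growing, jittered delays until an address appears. A minimal Go sketch of that retry-with-backoff shape (illustrative only; `lookup` stands in for whatever actually queries libvirt, and the exact delays are assumptions):

```go
// waitForIP polls an arbitrary lookup function with growing, jittered delays,
// mirroring the "will retry after ..." cadence in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, capped so we keep polling regularly.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Hypothetical lookup that never succeeds, just to exercise the loop shape.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(err)
}
```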
	I0802 17:44:37.984054   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.984437   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:37.984466   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:37.984606   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using SSH client type: external
	I0802 17:44:37.984626   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa (-rw-------)
	I0802 17:44:37.984693   23378 main.go:141] libmachine: (ha-652395-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:44:37.984731   23378 main.go:141] libmachine: (ha-652395-m02) DBG | About to run SSH command:
	I0802 17:44:37.984750   23378 main.go:141] libmachine: (ha-652395-m02) DBG | exit 0
	I0802 17:44:38.107117   23378 main.go:141] libmachine: (ha-652395-m02) DBG | SSH cmd err, output: <nil>: 
	I0802 17:44:38.107368   23378 main.go:141] libmachine: (ha-652395-m02) KVM machine creation complete!
	I0802 17:44:38.107781   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:38.108358   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:38.108554   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:38.108723   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:44:38.108741   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 17:44:38.109932   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:44:38.109943   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:44:38.109949   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:44:38.109955   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.112060   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.112416   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.112445   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.112597   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.112786   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.112943   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.113070   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.113224   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.113437   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.113451   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:44:38.214068   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:44:38.214118   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:44:38.214130   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.217033   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.217446   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.217469   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.217716   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.217933   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.218065   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.218187   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.218324   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.218495   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.218508   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:44:38.319349   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:44:38.319439   23378 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:44:38.319451   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:44:38.319459   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.319784   23378 buildroot.go:166] provisioning hostname "ha-652395-m02"
	I0802 17:44:38.319806   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.319988   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.322329   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.322663   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.322698   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.322835   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.323023   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.323189   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.323360   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.323519   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.323701   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.323714   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395-m02 && echo "ha-652395-m02" | sudo tee /etc/hostname
	I0802 17:44:38.436740   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395-m02
	
	I0802 17:44:38.436767   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.439340   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.439683   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.439704   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.439915   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.440058   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.440228   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.440357   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.440518   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.440679   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.440694   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:44:38.551741   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:44:38.551770   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:44:38.551788   23378 buildroot.go:174] setting up certificates
	I0802 17:44:38.551800   23378 provision.go:84] configureAuth start
	I0802 17:44:38.551808   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetMachineName
	I0802 17:44:38.552063   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:38.554962   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.555316   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.555342   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.555517   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.557789   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.558146   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.558176   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.558317   23378 provision.go:143] copyHostCerts
	I0802 17:44:38.558347   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:44:38.558374   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:44:38.558383   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:44:38.558449   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:44:38.558516   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:44:38.558532   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:44:38.558539   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:44:38.558562   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:44:38.558604   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:44:38.558620   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:44:38.558625   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:44:38.558645   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:44:38.558693   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395-m02 san=[127.0.0.1 192.168.39.220 ha-652395-m02 localhost minikube]
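provision.go:117 then issues a server certificate signed by the minikube CA carrying the IP and DNS SANs listed above. A compressed crypto/x509 sketch of issuing such a certificate (a throwaway CA is generated on the spot purely for illustration; this is not minikube's implementation):

```go
// Sketch: issue a server certificate with the SANs seen in the log
// (127.0.0.1, 192.168.39.220, ha-652395-m02, localhost, minikube),
// signed here by a throwaway CA instead of the real ca.pem/ca-key.pem.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would load its existing CA cert and key instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "illustrativeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server key and certificate with the IP and DNS SANs from the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-652395-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-652395-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```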
	I0802 17:44:38.671752   23378 provision.go:177] copyRemoteCerts
	I0802 17:44:38.671807   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:44:38.671831   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.674377   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.674746   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.674776   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.674955   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.675166   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.675320   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.675457   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:38.757096   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:44:38.757200   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:44:38.779767   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:44:38.779830   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:44:38.801698   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:44:38.801769   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 17:44:38.822909   23378 provision.go:87] duration metric: took 271.098404ms to configureAuth
	I0802 17:44:38.822936   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:44:38.823161   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:38.823242   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:38.825732   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.826166   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:38.826202   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:38.826372   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:38.826581   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.826796   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:38.826908   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:38.827087   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:38.827297   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:38.827312   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:44:39.088891   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:44:39.088922   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:44:39.088933   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetURL
	I0802 17:44:39.090300   23378 main.go:141] libmachine: (ha-652395-m02) DBG | Using libvirt version 6000000
	I0802 17:44:39.092491   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.092929   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.092956   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.093127   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:44:39.093142   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:44:39.093148   23378 client.go:171] duration metric: took 20.931409084s to LocalClient.Create
	I0802 17:44:39.093170   23378 start.go:167] duration metric: took 20.931472826s to libmachine.API.Create "ha-652395"
	I0802 17:44:39.093182   23378 start.go:293] postStartSetup for "ha-652395-m02" (driver="kvm2")
	I0802 17:44:39.093203   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:44:39.093232   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.093466   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:44:39.093502   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.095643   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.095927   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.095966   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.096065   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.096227   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.096422   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.096584   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.176707   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:44:39.180614   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:44:39.180641   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:44:39.180712   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:44:39.180804   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:44:39.180816   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:44:39.180927   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:44:39.189414   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:44:39.212745   23378 start.go:296] duration metric: took 119.54014ms for postStartSetup
	I0802 17:44:39.212798   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetConfigRaw
	I0802 17:44:39.213390   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:39.215996   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.216331   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.216353   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.216579   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:44:39.216783   23378 start.go:128] duration metric: took 21.073923256s to createHost
	I0802 17:44:39.216813   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.218819   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.219124   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.219150   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.219276   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.219450   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.219614   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.219728   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.219909   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:44:39.220059   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0802 17:44:39.220069   23378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:44:39.319869   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620679.291917014
	
	I0802 17:44:39.319888   23378 fix.go:216] guest clock: 1722620679.291917014
	I0802 17:44:39.319895   23378 fix.go:229] Guest: 2024-08-02 17:44:39.291917014 +0000 UTC Remote: 2024-08-02 17:44:39.216799126 +0000 UTC m=+71.718511413 (delta=75.117888ms)
	I0802 17:44:39.319910   23378 fix.go:200] guest clock delta is within tolerance: 75.117888ms
	I0802 17:44:39.319915   23378 start.go:83] releasing machines lock for "ha-652395-m02", held for 21.17719812s
	I0802 17:44:39.319936   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.320212   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:39.323026   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.323417   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.323439   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.325474   23378 out.go:177] * Found network options:
	I0802 17:44:39.326716   23378 out.go:177]   - NO_PROXY=192.168.39.210
	W0802 17:44:39.327787   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:44:39.327816   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328312   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328501   23378 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 17:44:39.328594   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:44:39.328634   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	W0802 17:44:39.328708   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:44:39.328783   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:44:39.328802   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 17:44:39.331373   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.331699   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.331786   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.331814   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.332009   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.332135   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:39.332156   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:39.332157   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.332313   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.332325   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 17:44:39.332476   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 17:44:39.332486   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.332592   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 17:44:39.332754   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 17:44:39.572919   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:44:39.578198   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:44:39.578263   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:44:39.593447   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:44:39.593468   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:44:39.593521   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:44:39.608957   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:44:39.623784   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:44:39.623836   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:44:39.637348   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:44:39.650294   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:44:39.756801   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:44:39.923019   23378 docker.go:233] disabling docker service ...
	I0802 17:44:39.923080   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:44:39.936516   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:44:39.948188   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:44:40.080438   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:44:40.210892   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:44:40.223531   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:44:40.240537   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:44:40.240619   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.249975   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:44:40.250029   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.259558   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.268600   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.277635   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:44:40.286932   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.295795   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:44:40.310995   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
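[editor's note] The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed over SSH (pause image, cgroup driver, conmon cgroup, default sysctls). For illustration only, a short Go sketch applying the two main substitutions to a local copy of the file; the path is an assumption so the example can run unprivileged.

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Idempotent, whole-line replacements mirroring the sed commands above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}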
	I0802 17:44:40.320007   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:44:40.328202   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:44:40.328250   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:44:40.339337   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:44:40.348015   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:44:40.464729   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:44:40.594497   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:44:40.594590   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:44:40.602160   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:44:40.602208   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:44:40.605735   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:44:40.639247   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:44:40.639336   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:44:40.665526   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:44:40.695767   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:44:40.697068   23378 out.go:177]   - env NO_PROXY=192.168.39.210
	I0802 17:44:40.698166   23378 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 17:44:40.700893   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:40.701259   23378 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:44:31 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 17:44:40.701277   23378 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 17:44:40.701456   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:44:40.705310   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:44:40.717053   23378 mustload.go:65] Loading cluster: ha-652395
	I0802 17:44:40.717224   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:44:40.717523   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:40.717561   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:40.732668   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0802 17:44:40.733146   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:40.733614   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:40.733637   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:40.733935   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:40.734106   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:44:40.735587   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:40.735855   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:40.735886   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:40.750415   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33583
	I0802 17:44:40.750836   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:40.751334   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:40.751359   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:40.751671   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:40.751897   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:40.752040   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.220
	I0802 17:44:40.752049   23378 certs.go:194] generating shared ca certs ...
	I0802 17:44:40.752062   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.752173   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:44:40.752208   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:44:40.752217   23378 certs.go:256] generating profile certs ...
	I0802 17:44:40.752288   23378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:44:40.752312   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99
	I0802 17:44:40.752323   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.254]
	I0802 17:44:40.937178   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 ...
	I0802 17:44:40.937208   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99: {Name:mk49cecd55ad68f4b0a4a86e8e819e8a12c316a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.937394   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99 ...
	I0802 17:44:40.937408   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99: {Name:mk536771078b4c1dcd616008289f4b5227c528ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:44:40.937478   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.cf86fe99 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:44:40.937624   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.cf86fe99 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:44:40.937757   23378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:44:40.937774   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:44:40.937787   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:44:40.937805   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:44:40.937820   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:44:40.937835   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:44:40.937851   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:44:40.937866   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:44:40.937877   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:44:40.937922   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:44:40.937953   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:44:40.937964   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:44:40.937989   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:44:40.938016   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:44:40.938040   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:44:40.938080   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:44:40.938111   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:44:40.938145   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:44:40.938160   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:40.938197   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:40.941249   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:40.941603   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:40.941624   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:40.941813   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:40.942023   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:40.942176   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:40.942313   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:41.015442   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0802 17:44:41.020810   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0802 17:44:41.031202   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0802 17:44:41.035560   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0802 17:44:41.046604   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0802 17:44:41.050405   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0802 17:44:41.060839   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0802 17:44:41.065156   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0802 17:44:41.075342   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0802 17:44:41.079408   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0802 17:44:41.090157   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0802 17:44:41.094084   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0802 17:44:41.104144   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:44:41.126799   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:44:41.150692   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:44:41.173258   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:44:41.199287   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0802 17:44:41.224657   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:44:41.252426   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:44:41.275051   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:44:41.296731   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:44:41.318223   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:44:41.339361   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:44:41.360210   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0802 17:44:41.375805   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0802 17:44:41.391015   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0802 17:44:41.406187   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0802 17:44:41.421086   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0802 17:44:41.435625   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0802 17:44:41.450732   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0802 17:44:41.466404   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:44:41.471656   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:44:41.481222   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.485089   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.485127   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:44:41.490157   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:44:41.499377   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:44:41.508987   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.512907   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.512959   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:44:41.518247   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:44:41.527658   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:44:41.539710   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.544001   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.544053   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:44:41.549106   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 17:44:41.558711   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:44:41.562637   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:44:41.562683   23378 kubeadm.go:934] updating node {m02 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0802 17:44:41.562771   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:44:41.562801   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:44:41.562843   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:44:41.579868   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:44:41.579940   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
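[editor's note] The generated kube-vip config above is a static Pod manifest written to /etc/kubernetes/manifests. As a sanity check, it can be decoded with the Kubernetes YAML helpers; the sketch below assumes the manifest has been saved locally as kube-vip.yaml and is not part of minikube's own flow.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // assumed local copy of the manifest above
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	fmt.Println(pod.Namespace, pod.Name, pod.Spec.Containers[0].Image)
	// Expected: kube-system kube-vip ghcr.io/kube-vip/kube-vip:v0.8.0
}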
	I0802 17:44:41.579993   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:44:41.589299   23378 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0802 17:44:41.589376   23378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0802 17:44:41.598147   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0802 17:44:41.598174   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:44:41.598227   23378 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0802 17:44:41.598243   23378 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0802 17:44:41.598269   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:44:41.602252   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0802 17:44:41.602274   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0802 17:44:47.952915   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:44:47.967335   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:44:47.967428   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:44:47.971470   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0802 17:44:47.971510   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0802 17:44:49.758724   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:44:49.758815   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:44:49.763825   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0802 17:44:49.763897   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0802 17:44:49.987338   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0802 17:44:49.996030   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0802 17:44:50.012191   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:44:50.027095   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0802 17:44:50.042184   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:44:50.045750   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
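[editor's note] The command above rewrites /etc/hosts so that exactly one entry maps control-plane.minikube.internal to the HA VIP. A minimal Go sketch of the same rewrite follows; writing to a test path instead of /etc/hosts is an assumption so the example can run unprivileged.

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line mentioning host and appends the
// desired "ip<TAB>host" mapping, mirroring the grep/echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.Contains(line, host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}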
	I0802 17:44:50.056837   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:44:50.186311   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:44:50.203539   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:44:50.203985   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:44:50.204036   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:44:50.219116   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0802 17:44:50.219550   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:44:50.220062   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:44:50.220077   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:44:50.220412   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:44:50.220611   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:44:50.220760   23378 start.go:317] joinCluster: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0802 17:44:50.220854   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0802 17:44:50.220875   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:44:50.223780   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:50.224134   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:44:50.224164   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:44:50.224277   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:44:50.224441   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:44:50.224578   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:44:50.224725   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:44:50.385078   23378 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:44:50.385119   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0pq2v.9gfsnqj2az7qhpq0 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0802 17:45:12.471079   23378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p0pq2v.9gfsnqj2az7qhpq0 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m02 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (22.085935956s)
	I0802 17:45:12.471132   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0802 17:45:12.973256   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395-m02 minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=false
	I0802 17:45:13.095420   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-652395-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0802 17:45:13.220320   23378 start.go:319] duration metric: took 22.999556113s to joinCluster
	I0802 17:45:13.220413   23378 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:45:13.220724   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:45:13.221807   23378 out.go:177] * Verifying Kubernetes components...
	I0802 17:45:13.223081   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:45:13.485242   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:45:13.518043   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:45:13.518398   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0802 17:45:13.518489   23378 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.210:8443
	I0802 17:45:13.518746   23378 node_ready.go:35] waiting up to 6m0s for node "ha-652395-m02" to be "Ready" ...
	I0802 17:45:13.518858   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:13.518871   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:13.518882   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:13.518892   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:13.548732   23378 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0802 17:45:14.019746   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:14.019774   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:14.019786   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:14.019794   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:14.027852   23378 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0802 17:45:14.519852   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:14.519870   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:14.519879   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:14.519882   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:14.531836   23378 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0802 17:45:15.019756   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:15.019782   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:15.019796   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:15.019802   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:15.023265   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:15.519640   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:15.519661   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:15.519668   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:15.519673   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:15.522645   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:15.523350   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:16.019475   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:16.019499   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:16.019511   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:16.019581   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:16.022465   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:16.519379   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:16.519406   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:16.519414   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:16.519417   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:16.545704   23378 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0802 17:45:17.019706   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:17.019731   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:17.019743   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:17.019749   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:17.022789   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:17.519541   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:17.519562   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:17.519571   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:17.519577   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:17.524279   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:17.525076   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:18.018940   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:18.018961   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:18.018968   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:18.018971   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:18.022419   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:18.519766   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:18.519790   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:18.519800   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:18.519806   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:18.523269   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:19.019258   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:19.019280   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:19.019291   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:19.019295   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:19.022515   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:19.519292   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:19.519318   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:19.519329   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:19.519335   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:19.522831   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:20.019888   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:20.019911   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:20.019919   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:20.019924   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:20.097223   23378 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0802 17:45:20.098490   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:20.519346   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:20.519367   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:20.519375   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:20.519380   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:20.522553   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:21.019724   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:21.019744   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:21.019752   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:21.019756   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:21.023258   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:21.519006   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:21.519030   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:21.519038   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:21.519042   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:21.522058   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.019027   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:22.019057   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:22.019070   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:22.019078   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:22.022631   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.518979   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:22.519004   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:22.519015   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:22.519020   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:22.522160   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:22.523156   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:23.019167   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:23.019191   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:23.019205   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:23.019209   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:23.022648   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:23.519023   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:23.519046   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:23.519054   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:23.519058   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:23.522488   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.019356   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:24.019378   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:24.019387   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:24.019391   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:24.022642   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.519672   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:24.519693   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:24.519704   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:24.519709   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:24.522892   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:24.523502   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:25.019971   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:25.019996   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:25.020004   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:25.020008   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:25.023202   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:25.519619   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:25.519641   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:25.519648   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:25.519654   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:25.522907   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:26.019939   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:26.019963   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:26.019970   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:26.019975   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:26.023421   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:26.519530   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:26.519552   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:26.519560   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:26.519563   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:26.522693   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:27.019819   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:27.019840   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:27.019848   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:27.019853   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:27.023031   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:27.023498   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:27.519213   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:27.519239   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:27.519249   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:27.519255   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:27.523180   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:28.019899   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:28.019918   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:28.019926   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:28.019929   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:28.023009   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:28.519448   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:28.519473   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:28.519481   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:28.519487   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:28.522731   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:29.019738   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:29.019764   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:29.019774   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:29.019780   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:29.023263   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:29.023781   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:29.519127   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:29.519156   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:29.519165   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:29.519177   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:29.522294   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:30.018921   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:30.018945   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:30.018952   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:30.018957   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:30.021949   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:30.519412   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:30.519433   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:30.519441   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:30.519444   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:30.522558   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:31.019167   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:31.019193   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:31.019202   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:31.019209   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:31.021946   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:31.519950   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:31.519976   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:31.519983   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:31.519986   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:31.523156   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:31.523846   23378 node_ready.go:53] node "ha-652395-m02" has status "Ready":"False"
	I0802 17:45:32.019168   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.019192   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.019202   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.019206   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.022636   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.023238   23378 node_ready.go:49] node "ha-652395-m02" has status "Ready":"True"
	I0802 17:45:32.023263   23378 node_ready.go:38] duration metric: took 18.504493823s for node "ha-652395-m02" to be "Ready" ...
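
The loop above issues GET /api/v1/nodes/ha-652395-m02 roughly every 500ms until the node's Ready condition flips to True (about 18.5s in this run). A minimal client-go sketch of an equivalent readiness poll is shown below; the kubeconfig path is a placeholder and the code is illustrative, not minikube's actual node_ready implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust to your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll GET /api/v1/nodes/<name> every 500ms, for up to 6 minutes,
    	// until the NodeReady condition reports True.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-652395-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }
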
	I0802 17:45:32.023276   23378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:45:32.023364   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:32.023376   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.023387   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.023393   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.027721   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:32.033894   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.033970   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7bnn4
	I0802 17:45:32.033979   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.033987   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.033991   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.036511   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.037139   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.037159   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.037170   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.037177   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.039503   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.040279   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.040307   23378 pod_ready.go:81] duration metric: took 6.388729ms for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.040321   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.040384   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzmsx
	I0802 17:45:32.040397   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.040407   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.040416   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.042585   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.043300   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.043316   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.043323   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.043327   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.045476   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.045919   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.045936   23378 pod_ready.go:81] duration metric: took 5.60755ms for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.045944   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.045985   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395
	I0802 17:45:32.045992   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.045999   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.046002   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.047897   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.048387   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.048401   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.048408   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.048412   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.050267   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.050713   23378 pod_ready.go:92] pod "etcd-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.050732   23378 pod_ready.go:81] duration metric: took 4.781908ms for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.050743   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.050845   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m02
	I0802 17:45:32.050857   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.050866   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.050873   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.052891   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.053386   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.053399   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.053409   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.053415   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.055225   23378 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0802 17:45:32.055582   23378 pod_ready.go:92] pod "etcd-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.055597   23378 pod_ready.go:81] duration metric: took 4.847646ms for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.055613   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.219994   23378 request.go:629] Waited for 164.311449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:45:32.220046   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:45:32.220051   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.220059   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.220062   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.223269   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.419314   23378 request.go:629] Waited for 195.314796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.419367   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:32.419372   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.419379   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.419383   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.422144   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.422635   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.422653   23378 pod_ready.go:81] duration metric: took 367.032422ms for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.422665   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.619816   23378 request.go:629] Waited for 197.083521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:45:32.619891   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:45:32.619898   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.619938   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.619947   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.623539   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:32.819747   23378 request.go:629] Waited for 195.467246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.819815   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:32.819829   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:32.819841   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:32.819849   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:32.822359   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:32.822831   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:32.822849   23378 pod_ready.go:81] duration metric: took 400.175771ms for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:32.822862   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.019985   23378 request.go:629] Waited for 197.042473ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:45:33.020039   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:45:33.020045   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.020053   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.020058   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.023333   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.219330   23378 request.go:629] Waited for 195.37121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:33.219395   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:33.219402   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.219415   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.219421   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.222461   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.222970   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:33.222989   23378 pod_ready.go:81] duration metric: took 400.118179ms for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.223001   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.420137   23378 request.go:629] Waited for 197.048244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:45:33.420225   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:45:33.420236   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.420247   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.420256   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.423944   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.619883   23378 request.go:629] Waited for 195.369597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:33.619962   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:33.619969   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.619980   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.619990   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.623435   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:33.623841   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:33.623857   23378 pod_ready.go:81] duration metric: took 400.845557ms for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.623869   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:33.820053   23378 request.go:629] Waited for 196.116391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:45:33.820133   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:45:33.820139   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:33.820147   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:33.820152   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:33.822992   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.019973   23378 request.go:629] Waited for 196.348436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.020037   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.020045   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.020057   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.020062   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.023256   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.023795   23378 pod_ready.go:92] pod "kube-proxy-l7npk" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.023812   23378 pod_ready.go:81] duration metric: took 399.936451ms for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.023822   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.219932   23378 request.go:629] Waited for 196.048785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:45:34.220019   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:45:34.220030   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.220041   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.220048   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.222994   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.419914   23378 request.go:629] Waited for 196.363004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:34.419967   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:34.419972   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.419980   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.419984   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.423711   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.424305   23378 pod_ready.go:92] pod "kube-proxy-rtbb6" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.424351   23378 pod_ready.go:81] duration metric: took 400.520107ms for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.424369   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.619408   23378 request.go:629] Waited for 194.97766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:45:34.619493   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:45:34.619504   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.619515   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.619522   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.622283   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:34.819200   23378 request.go:629] Waited for 196.25146ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.819285   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:45:34.819296   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:34.819306   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:34.819320   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:34.822755   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:34.823703   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:34.823724   23378 pod_ready.go:81] duration metric: took 399.347186ms for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:34.823736   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:35.019687   23378 request.go:629] Waited for 195.881363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:45:35.019743   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:45:35.019748   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.019758   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.019765   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.023283   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.220230   23378 request.go:629] Waited for 196.388546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:35.220284   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:45:35.220290   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.220300   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.220306   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.223673   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.224098   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:45:35.224115   23378 pod_ready.go:81] duration metric: took 400.371867ms for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:45:35.224125   23378 pod_ready.go:38] duration metric: took 3.200833837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
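
The pod_ready phase above walks the kube-system pods carrying the listed control-plane labels and waits on each pod's PodReady condition. A short illustrative client-go sketch of one such check follows; the kubeconfig path is a placeholder, and only one of the label selectors is shown since the rest follow the same pattern.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether a pod's PodReady condition is True.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Placeholder kubeconfig path; adjust to your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// One of the selectors checked above (component=etcd); the others work the same way.
    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "component=etcd"})
    	if err != nil {
    		panic(err)
    	}
    	for i := range pods.Items {
    		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
    	}
    }
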
	I0802 17:45:35.224138   23378 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:45:35.224194   23378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:45:35.240089   23378 api_server.go:72] duration metric: took 22.019638509s to wait for apiserver process to appear ...
	I0802 17:45:35.240112   23378 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:45:35.240131   23378 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0802 17:45:35.244269   23378 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0802 17:45:35.244336   23378 round_trippers.go:463] GET https://192.168.39.210:8443/version
	I0802 17:45:35.244343   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.244351   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.244355   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.245181   23378 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0802 17:45:35.245275   23378 api_server.go:141] control plane version: v1.30.3
	I0802 17:45:35.245292   23378 api_server.go:131] duration metric: took 5.174481ms to wait for apiserver health ...
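
The apiserver health check above is two plain GETs: /healthz, which answers with the literal body "ok" when healthy, and /version, which reports the control-plane version (v1.30.3 in this run). A small client-go sketch of the same pair of requests is shown below; the kubeconfig path is a placeholder and the code is illustrative rather than minikube's api_server helper.

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust to your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// GET /healthz: a healthy API server returns the body "ok".
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)

    	// GET /version: reports the control-plane version.
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }
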
	I0802 17:45:35.245300   23378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:45:35.419746   23378 request.go:629] Waited for 174.36045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.419813   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.419818   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.419825   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.419830   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.424825   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:45:35.429454   23378 system_pods.go:59] 17 kube-system pods found
	I0802 17:45:35.429480   23378 system_pods.go:61] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:45:35.429485   23378 system_pods.go:61] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:45:35.429489   23378 system_pods.go:61] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:45:35.429492   23378 system_pods.go:61] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:45:35.429495   23378 system_pods.go:61] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:45:35.429498   23378 system_pods.go:61] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:45:35.429501   23378 system_pods.go:61] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:45:35.429505   23378 system_pods.go:61] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:45:35.429508   23378 system_pods.go:61] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:45:35.429511   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:45:35.429514   23378 system_pods.go:61] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:45:35.429517   23378 system_pods.go:61] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:45:35.429520   23378 system_pods.go:61] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:45:35.429523   23378 system_pods.go:61] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:45:35.429526   23378 system_pods.go:61] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:45:35.429528   23378 system_pods.go:61] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:45:35.429531   23378 system_pods.go:61] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:45:35.429536   23378 system_pods.go:74] duration metric: took 184.22892ms to wait for pod list to return data ...
	I0802 17:45:35.429544   23378 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:45:35.620020   23378 request.go:629] Waited for 190.404655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:45:35.620077   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:45:35.620083   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.620091   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.620097   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.623444   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:45:35.623671   23378 default_sa.go:45] found service account: "default"
	I0802 17:45:35.623688   23378 default_sa.go:55] duration metric: took 194.138636ms for default service account to be created ...
	I0802 17:45:35.623696   23378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:45:35.819867   23378 request.go:629] Waited for 196.105859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.819953   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:45:35.819965   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:35.819975   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:35.819981   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:35.825590   23378 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0802 17:45:35.829401   23378 system_pods.go:86] 17 kube-system pods found
	I0802 17:45:35.829428   23378 system_pods.go:89] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:45:35.829434   23378 system_pods.go:89] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:45:35.829438   23378 system_pods.go:89] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:45:35.829443   23378 system_pods.go:89] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:45:35.829448   23378 system_pods.go:89] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:45:35.829452   23378 system_pods.go:89] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:45:35.829455   23378 system_pods.go:89] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:45:35.829460   23378 system_pods.go:89] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:45:35.829463   23378 system_pods.go:89] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:45:35.829467   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:45:35.829471   23378 system_pods.go:89] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:45:35.829474   23378 system_pods.go:89] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:45:35.829479   23378 system_pods.go:89] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:45:35.829482   23378 system_pods.go:89] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:45:35.829489   23378 system_pods.go:89] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:45:35.829492   23378 system_pods.go:89] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:45:35.829495   23378 system_pods.go:89] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:45:35.829501   23378 system_pods.go:126] duration metric: took 205.801478ms to wait for k8s-apps to be running ...
	I0802 17:45:35.829511   23378 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:45:35.829552   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:45:35.844416   23378 system_svc.go:56] duration metric: took 14.896551ms WaitForService to wait for kubelet
	I0802 17:45:35.844449   23378 kubeadm.go:582] duration metric: took 22.624001927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
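
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in this phase come from client-go's default rate limiter: with QPS and Burst left at 0 in the rest.Config (as the config dump earlier in the log shows), the client falls back to roughly 5 requests per second with a burst of 10, so runs of back-to-back GETs get spaced out on the client side. The sketch below shows where those knobs live; the kubeconfig path is a placeholder and the values are arbitrary examples, not a recommendation.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust to your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}

    	// With QPS/Burst left at 0, client-go applies its default token-bucket
    	// limiter (about 5 QPS, burst 10), which produces the throttling waits
    	// seen in the log. Raising them loosens the client-side limit.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", client)
    }
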
	I0802 17:45:35.844472   23378 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:45:36.019899   23378 request.go:629] Waited for 175.358786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes
	I0802 17:45:36.019973   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes
	I0802 17:45:36.019979   23378 round_trippers.go:469] Request Headers:
	I0802 17:45:36.019986   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:45:36.019991   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:45:36.022913   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:45:36.023637   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:45:36.023657   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:45:36.023667   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:45:36.023670   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:45:36.023674   23378 node_conditions.go:105] duration metric: took 179.19768ms to run NodePressure ...
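
The NodePressure step reads each node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs per node in this run). A brief illustrative sketch of pulling the same figures from the API follows; the kubeconfig path is again a placeholder.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; adjust to your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Capacity is a ResourceList keyed by resource name.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }
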
	I0802 17:45:36.023684   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:45:36.023707   23378 start.go:255] writing updated cluster config ...
	I0802 17:45:36.025800   23378 out.go:177] 
	I0802 17:45:36.027316   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:45:36.027411   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:45:36.028935   23378 out.go:177] * Starting "ha-652395-m03" control-plane node in "ha-652395" cluster
	I0802 17:45:36.030014   23378 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:45:36.030042   23378 cache.go:56] Caching tarball of preloaded images
	I0802 17:45:36.030149   23378 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:45:36.030162   23378 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:45:36.030247   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:45:36.030437   23378 start.go:360] acquireMachinesLock for ha-652395-m03: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:45:36.030483   23378 start.go:364] duration metric: took 24.923µs to acquireMachinesLock for "ha-652395-m03"
	I0802 17:45:36.030501   23378 start.go:93] Provisioning new machine with config: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:45:36.030592   23378 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0802 17:45:36.032070   23378 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 17:45:36.032163   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:45:36.032197   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:45:36.047016   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0802 17:45:36.047629   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:45:36.048162   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:45:36.048186   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:45:36.048493   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:45:36.048684   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:45:36.048823   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:45:36.048964   23378 start.go:159] libmachine.API.Create for "ha-652395" (driver="kvm2")
	I0802 17:45:36.048994   23378 client.go:168] LocalClient.Create starting
	I0802 17:45:36.049027   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 17:45:36.049068   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:45:36.049089   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:45:36.049157   23378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 17:45:36.049198   23378 main.go:141] libmachine: Decoding PEM data...
	I0802 17:45:36.049214   23378 main.go:141] libmachine: Parsing certificate...
	I0802 17:45:36.049233   23378 main.go:141] libmachine: Running pre-create checks...
	I0802 17:45:36.049242   23378 main.go:141] libmachine: (ha-652395-m03) Calling .PreCreateCheck
	I0802 17:45:36.049413   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:45:36.049924   23378 main.go:141] libmachine: Creating machine...
	I0802 17:45:36.049938   23378 main.go:141] libmachine: (ha-652395-m03) Calling .Create
	I0802 17:45:36.050035   23378 main.go:141] libmachine: (ha-652395-m03) Creating KVM machine...
	I0802 17:45:36.051210   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found existing default KVM network
	I0802 17:45:36.051359   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found existing private KVM network mk-ha-652395
	I0802 17:45:36.051513   23378 main.go:141] libmachine: (ha-652395-m03) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 ...
	I0802 17:45:36.051537   23378 main.go:141] libmachine: (ha-652395-m03) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:45:36.051582   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.051497   24173 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:45:36.051645   23378 main.go:141] libmachine: (ha-652395-m03) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 17:45:36.283642   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.283510   24173 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa...
	I0802 17:45:36.404288   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.404161   24173 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/ha-652395-m03.rawdisk...
	I0802 17:45:36.404325   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Writing magic tar header
	I0802 17:45:36.404340   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Writing SSH key tar header
	I0802 17:45:36.404367   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:36.404314   24173 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 ...
	I0802 17:45:36.404478   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03
	I0802 17:45:36.404506   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 17:45:36.404522   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03 (perms=drwx------)
	I0802 17:45:36.404541   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 17:45:36.404555   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 17:45:36.404580   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:45:36.404598   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 17:45:36.404608   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 17:45:36.404623   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 17:45:36.404631   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home/jenkins
	I0802 17:45:36.404641   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Checking permissions on dir: /home
	I0802 17:45:36.404658   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Skipping /home - not owner
	I0802 17:45:36.404672   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 17:45:36.404688   23378 main.go:141] libmachine: (ha-652395-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 17:45:36.404699   23378 main.go:141] libmachine: (ha-652395-m03) Creating domain...
	I0802 17:45:36.405773   23378 main.go:141] libmachine: (ha-652395-m03) define libvirt domain using xml: 
	I0802 17:45:36.405799   23378 main.go:141] libmachine: (ha-652395-m03) <domain type='kvm'>
	I0802 17:45:36.405811   23378 main.go:141] libmachine: (ha-652395-m03)   <name>ha-652395-m03</name>
	I0802 17:45:36.405818   23378 main.go:141] libmachine: (ha-652395-m03)   <memory unit='MiB'>2200</memory>
	I0802 17:45:36.405827   23378 main.go:141] libmachine: (ha-652395-m03)   <vcpu>2</vcpu>
	I0802 17:45:36.405837   23378 main.go:141] libmachine: (ha-652395-m03)   <features>
	I0802 17:45:36.405843   23378 main.go:141] libmachine: (ha-652395-m03)     <acpi/>
	I0802 17:45:36.405850   23378 main.go:141] libmachine: (ha-652395-m03)     <apic/>
	I0802 17:45:36.405859   23378 main.go:141] libmachine: (ha-652395-m03)     <pae/>
	I0802 17:45:36.405866   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.405876   23378 main.go:141] libmachine: (ha-652395-m03)   </features>
	I0802 17:45:36.405892   23378 main.go:141] libmachine: (ha-652395-m03)   <cpu mode='host-passthrough'>
	I0802 17:45:36.405924   23378 main.go:141] libmachine: (ha-652395-m03)   
	I0802 17:45:36.405968   23378 main.go:141] libmachine: (ha-652395-m03)   </cpu>
	I0802 17:45:36.405983   23378 main.go:141] libmachine: (ha-652395-m03)   <os>
	I0802 17:45:36.405995   23378 main.go:141] libmachine: (ha-652395-m03)     <type>hvm</type>
	I0802 17:45:36.406008   23378 main.go:141] libmachine: (ha-652395-m03)     <boot dev='cdrom'/>
	I0802 17:45:36.406027   23378 main.go:141] libmachine: (ha-652395-m03)     <boot dev='hd'/>
	I0802 17:45:36.406038   23378 main.go:141] libmachine: (ha-652395-m03)     <bootmenu enable='no'/>
	I0802 17:45:36.406044   23378 main.go:141] libmachine: (ha-652395-m03)   </os>
	I0802 17:45:36.406054   23378 main.go:141] libmachine: (ha-652395-m03)   <devices>
	I0802 17:45:36.406066   23378 main.go:141] libmachine: (ha-652395-m03)     <disk type='file' device='cdrom'>
	I0802 17:45:36.406083   23378 main.go:141] libmachine: (ha-652395-m03)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/boot2docker.iso'/>
	I0802 17:45:36.406095   23378 main.go:141] libmachine: (ha-652395-m03)       <target dev='hdc' bus='scsi'/>
	I0802 17:45:36.406106   23378 main.go:141] libmachine: (ha-652395-m03)       <readonly/>
	I0802 17:45:36.406123   23378 main.go:141] libmachine: (ha-652395-m03)     </disk>
	I0802 17:45:36.406137   23378 main.go:141] libmachine: (ha-652395-m03)     <disk type='file' device='disk'>
	I0802 17:45:36.406151   23378 main.go:141] libmachine: (ha-652395-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 17:45:36.406164   23378 main.go:141] libmachine: (ha-652395-m03)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/ha-652395-m03.rawdisk'/>
	I0802 17:45:36.406172   23378 main.go:141] libmachine: (ha-652395-m03)       <target dev='hda' bus='virtio'/>
	I0802 17:45:36.406182   23378 main.go:141] libmachine: (ha-652395-m03)     </disk>
	I0802 17:45:36.406190   23378 main.go:141] libmachine: (ha-652395-m03)     <interface type='network'>
	I0802 17:45:36.406196   23378 main.go:141] libmachine: (ha-652395-m03)       <source network='mk-ha-652395'/>
	I0802 17:45:36.406206   23378 main.go:141] libmachine: (ha-652395-m03)       <model type='virtio'/>
	I0802 17:45:36.406216   23378 main.go:141] libmachine: (ha-652395-m03)     </interface>
	I0802 17:45:36.406231   23378 main.go:141] libmachine: (ha-652395-m03)     <interface type='network'>
	I0802 17:45:36.406243   23378 main.go:141] libmachine: (ha-652395-m03)       <source network='default'/>
	I0802 17:45:36.406254   23378 main.go:141] libmachine: (ha-652395-m03)       <model type='virtio'/>
	I0802 17:45:36.406262   23378 main.go:141] libmachine: (ha-652395-m03)     </interface>
	I0802 17:45:36.406272   23378 main.go:141] libmachine: (ha-652395-m03)     <serial type='pty'>
	I0802 17:45:36.406278   23378 main.go:141] libmachine: (ha-652395-m03)       <target port='0'/>
	I0802 17:45:36.406284   23378 main.go:141] libmachine: (ha-652395-m03)     </serial>
	I0802 17:45:36.406290   23378 main.go:141] libmachine: (ha-652395-m03)     <console type='pty'>
	I0802 17:45:36.406302   23378 main.go:141] libmachine: (ha-652395-m03)       <target type='serial' port='0'/>
	I0802 17:45:36.406329   23378 main.go:141] libmachine: (ha-652395-m03)     </console>
	I0802 17:45:36.406348   23378 main.go:141] libmachine: (ha-652395-m03)     <rng model='virtio'>
	I0802 17:45:36.406363   23378 main.go:141] libmachine: (ha-652395-m03)       <backend model='random'>/dev/random</backend>
	I0802 17:45:36.406379   23378 main.go:141] libmachine: (ha-652395-m03)     </rng>
	I0802 17:45:36.406391   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.406400   23378 main.go:141] libmachine: (ha-652395-m03)     
	I0802 17:45:36.406409   23378 main.go:141] libmachine: (ha-652395-m03)   </devices>
	I0802 17:45:36.406420   23378 main.go:141] libmachine: (ha-652395-m03) </domain>
	I0802 17:45:36.406441   23378 main.go:141] libmachine: (ha-652395-m03) 
	I0802 17:45:36.413279   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:43:36:db in network default
	I0802 17:45:36.413820   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring networks are active...
	I0802 17:45:36.413862   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:36.414657   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring network default is active
	I0802 17:45:36.414968   23378 main.go:141] libmachine: (ha-652395-m03) Ensuring network mk-ha-652395 is active
	I0802 17:45:36.415435   23378 main.go:141] libmachine: (ha-652395-m03) Getting domain xml...
	I0802 17:45:36.416067   23378 main.go:141] libmachine: (ha-652395-m03) Creating domain...
	I0802 17:45:37.658293   23378 main.go:141] libmachine: (ha-652395-m03) Waiting to get IP...
	I0802 17:45:37.659127   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:37.659538   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:37.659586   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:37.659533   24173 retry.go:31] will retry after 278.414041ms: waiting for machine to come up
	I0802 17:45:37.940057   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:37.940529   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:37.940562   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:37.940509   24173 retry.go:31] will retry after 280.874502ms: waiting for machine to come up
	I0802 17:45:38.223047   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:38.223534   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:38.223558   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:38.223511   24173 retry.go:31] will retry after 340.959076ms: waiting for machine to come up
	I0802 17:45:38.566122   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:38.566544   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:38.566567   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:38.566510   24173 retry.go:31] will retry after 573.792131ms: waiting for machine to come up
	I0802 17:45:39.142236   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:39.142669   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:39.142701   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:39.142606   24173 retry.go:31] will retry after 480.184052ms: waiting for machine to come up
	I0802 17:45:39.624228   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:39.624766   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:39.624794   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:39.624719   24173 retry.go:31] will retry after 640.998486ms: waiting for machine to come up
	I0802 17:45:40.267613   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:40.267998   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:40.268025   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:40.267953   24173 retry.go:31] will retry after 1.037547688s: waiting for machine to come up
	I0802 17:45:41.306919   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:41.307496   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:41.307524   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:41.307443   24173 retry.go:31] will retry after 1.487765562s: waiting for machine to come up
	I0802 17:45:42.796982   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:42.797437   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:42.797468   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:42.797389   24173 retry.go:31] will retry after 1.712646843s: waiting for machine to come up
	I0802 17:45:44.512180   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:44.512627   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:44.512655   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:44.512581   24173 retry.go:31] will retry after 2.117852157s: waiting for machine to come up
	I0802 17:45:46.632392   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:46.632797   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:46.632825   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:46.632740   24173 retry.go:31] will retry after 1.87779902s: waiting for machine to come up
	I0802 17:45:48.512236   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:48.512705   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:48.512731   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:48.512659   24173 retry.go:31] will retry after 2.645114759s: waiting for machine to come up
	I0802 17:45:51.159777   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:51.160216   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:51.160240   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:51.160201   24173 retry.go:31] will retry after 3.916763457s: waiting for machine to come up
	I0802 17:45:55.080334   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:55.080702   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find current IP address of domain ha-652395-m03 in network mk-ha-652395
	I0802 17:45:55.080728   23378 main.go:141] libmachine: (ha-652395-m03) DBG | I0802 17:45:55.080659   24173 retry.go:31] will retry after 4.726540914s: waiting for machine to come up
	I0802 17:45:59.810530   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.810997   23378 main.go:141] libmachine: (ha-652395-m03) Found IP for machine: 192.168.39.62
	I0802 17:45:59.811032   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has current primary IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.811041   23378 main.go:141] libmachine: (ha-652395-m03) Reserving static IP address...
	I0802 17:45:59.811400   23378 main.go:141] libmachine: (ha-652395-m03) DBG | unable to find host DHCP lease matching {name: "ha-652395-m03", mac: "52:54:00:23:60:5b", ip: "192.168.39.62"} in network mk-ha-652395
	I0802 17:45:59.884517   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Getting to WaitForSSH function...
	I0802 17:45:59.884551   23378 main.go:141] libmachine: (ha-652395-m03) Reserved static IP address: 192.168.39.62
	I0802 17:45:59.884566   23378 main.go:141] libmachine: (ha-652395-m03) Waiting for SSH to be available...
	I0802 17:45:59.887972   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.888390   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:23:60:5b}
	I0802 17:45:59.888430   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:45:59.888577   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using SSH client type: external
	I0802 17:45:59.888599   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa (-rw-------)
	I0802 17:45:59.888629   23378 main.go:141] libmachine: (ha-652395-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 17:45:59.888641   23378 main.go:141] libmachine: (ha-652395-m03) DBG | About to run SSH command:
	I0802 17:45:59.888655   23378 main.go:141] libmachine: (ha-652395-m03) DBG | exit 0
	I0802 17:46:00.015258   23378 main.go:141] libmachine: (ha-652395-m03) DBG | SSH cmd err, output: <nil>: 
	I0802 17:46:00.015565   23378 main.go:141] libmachine: (ha-652395-m03) KVM machine creation complete!
	I0802 17:46:00.015949   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:46:00.016541   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:00.016754   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:00.016928   23378 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 17:46:00.016942   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:46:00.018209   23378 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 17:46:00.018225   23378 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 17:46:00.018234   23378 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 17:46:00.018242   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.020481   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.020805   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.020830   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.020978   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.021123   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.021274   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.021372   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.021519   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.021771   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.021787   23378 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 17:46:00.126517   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:46:00.126552   23378 main.go:141] libmachine: Detecting the provisioner...
	I0802 17:46:00.126565   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.129422   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.129818   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.129863   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.129986   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.130170   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.130329   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.130493   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.130653   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.130820   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.130832   23378 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 17:46:00.239767   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 17:46:00.239864   23378 main.go:141] libmachine: found compatible host: buildroot
	I0802 17:46:00.239880   23378 main.go:141] libmachine: Provisioning with buildroot...
	I0802 17:46:00.239890   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.240107   23378 buildroot.go:166] provisioning hostname "ha-652395-m03"
	I0802 17:46:00.240134   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.240295   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.242732   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.243176   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.243203   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.243353   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.243521   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.243667   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.243786   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.243946   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.244098   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.244110   23378 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395-m03 && echo "ha-652395-m03" | sudo tee /etc/hostname
	I0802 17:46:00.365172   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395-m03
	
	I0802 17:46:00.365198   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.367989   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.368345   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.368367   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.368509   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.368726   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.368909   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.369054   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.369248   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:00.369421   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:00.369446   23378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:46:00.484223   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:46:00.484256   23378 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:46:00.484280   23378 buildroot.go:174] setting up certificates
	I0802 17:46:00.484290   23378 provision.go:84] configureAuth start
	I0802 17:46:00.484300   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetMachineName
	I0802 17:46:00.484588   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:00.487348   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.487676   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.487713   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.487867   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.490085   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.490431   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.490458   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.490591   23378 provision.go:143] copyHostCerts
	I0802 17:46:00.490631   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:46:00.490680   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:46:00.490691   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:46:00.490769   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:46:00.490952   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:46:00.490984   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:46:00.490993   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:46:00.491048   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:46:00.491135   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:46:00.491159   23378 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:46:00.491168   23378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:46:00.491202   23378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:46:00.491269   23378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395-m03 san=[127.0.0.1 192.168.39.62 ha-652395-m03 localhost minikube]
	I0802 17:46:00.884913   23378 provision.go:177] copyRemoteCerts
	I0802 17:46:00.884973   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:46:00.884998   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:00.888105   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.888518   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:00.888550   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:00.888766   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:00.888984   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:00.889229   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:00.889398   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:00.972704   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:46:00.972791   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 17:46:00.995560   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:46:00.995621   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:46:01.017657   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:46:01.017722   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:46:01.040053   23378 provision.go:87] duration metric: took 555.74644ms to configureAuth
	I0802 17:46:01.040086   23378 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:46:01.040357   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:01.040467   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.043361   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.043739   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.043774   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.043894   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.044105   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.044265   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.044411   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.044579   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:01.044759   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:01.044772   23378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:46:01.311642   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:46:01.311677   23378 main.go:141] libmachine: Checking connection to Docker...
	I0802 17:46:01.311688   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetURL
	I0802 17:46:01.313011   23378 main.go:141] libmachine: (ha-652395-m03) DBG | Using libvirt version 6000000
	I0802 17:46:01.315324   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.315713   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.315743   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.315993   23378 main.go:141] libmachine: Docker is up and running!
	I0802 17:46:01.316006   23378 main.go:141] libmachine: Reticulating splines...
	I0802 17:46:01.316012   23378 client.go:171] duration metric: took 25.267010388s to LocalClient.Create
	I0802 17:46:01.316034   23378 start.go:167] duration metric: took 25.267071211s to libmachine.API.Create "ha-652395"
	I0802 17:46:01.316048   23378 start.go:293] postStartSetup for "ha-652395-m03" (driver="kvm2")
	I0802 17:46:01.316058   23378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:46:01.316073   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.316307   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:46:01.316344   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.318593   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.318910   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.318935   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.319053   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.319231   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.319431   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.319684   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.401372   23378 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:46:01.405564   23378 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:46:01.405593   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:46:01.405666   23378 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:46:01.405735   23378 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:46:01.405744   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:46:01.405819   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:46:01.416344   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:46:01.439311   23378 start.go:296] duration metric: took 123.247965ms for postStartSetup
	I0802 17:46:01.439392   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetConfigRaw
	I0802 17:46:01.439971   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:01.442873   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.443331   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.443362   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.443659   23378 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:46:01.443867   23378 start.go:128] duration metric: took 25.413264333s to createHost
	I0802 17:46:01.443890   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.446191   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.446520   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.446552   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.446692   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.446864   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.447045   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.447228   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.447388   23378 main.go:141] libmachine: Using SSH client type: native
	I0802 17:46:01.447534   23378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I0802 17:46:01.447544   23378 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 17:46:01.555441   23378 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722620761.533849867
	
	I0802 17:46:01.555469   23378 fix.go:216] guest clock: 1722620761.533849867
	I0802 17:46:01.555482   23378 fix.go:229] Guest: 2024-08-02 17:46:01.533849867 +0000 UTC Remote: 2024-08-02 17:46:01.443878214 +0000 UTC m=+153.945590491 (delta=89.971653ms)
	I0802 17:46:01.555506   23378 fix.go:200] guest clock delta is within tolerance: 89.971653ms
	I0802 17:46:01.555514   23378 start.go:83] releasing machines lock for "ha-652395-m03", held for 25.52502111s
	I0802 17:46:01.555542   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.555795   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:01.558412   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.558778   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.558808   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.560907   23378 out.go:177] * Found network options:
	I0802 17:46:01.562135   23378 out.go:177]   - NO_PROXY=192.168.39.210,192.168.39.220
	W0802 17:46:01.563401   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	W0802 17:46:01.563424   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:46:01.563437   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.563984   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.564186   23378 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:46:01.564285   23378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:46:01.564324   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	W0802 17:46:01.564412   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	W0802 17:46:01.564437   23378 proxy.go:119] fail to check proxy env: Error ip not in block
	I0802 17:46:01.564500   23378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:46:01.564522   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:46:01.566998   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567329   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567356   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.567378   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567560   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.567736   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.567819   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:01.567853   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:01.567899   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.568087   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:46:01.568093   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.568261   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:46:01.568420   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:46:01.568557   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:46:01.796482   23378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:46:01.802346   23378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:46:01.802418   23378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:46:01.821079   23378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 17:46:01.821100   23378 start.go:495] detecting cgroup driver to use...
	I0802 17:46:01.821156   23378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:46:01.837276   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:46:01.850195   23378 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:46:01.850246   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:46:01.863020   23378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:46:01.876817   23378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:46:01.996317   23378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:46:02.155795   23378 docker.go:233] disabling docker service ...
	I0802 17:46:02.155854   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:46:02.171577   23378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:46:02.185476   23378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:46:02.316663   23378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:46:02.441608   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:46:02.456599   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:46:02.474518   23378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:46:02.474602   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.484459   23378 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:46:02.484524   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.493884   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.503576   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.513428   23378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:46:02.523479   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.532970   23378 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.549805   23378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:46:02.559448   23378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:46:02.568425   23378 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 17:46:02.568503   23378 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 17:46:02.581992   23378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:46:02.591609   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:02.726113   23378 ssh_runner.go:195] Run: sudo systemctl restart crio
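The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed before restarting CRI-O: it pins the pause image, switches the cgroup manager to cgroupfs, and opens unprivileged ports via default_sysctls. A minimal Go sketch of the same set-or-append pattern (path and keys taken from the log; helper names are illustrative, this is not minikube's implementation):

// crio_conf.go: illustrative sketch only — set a key in a CRI-O drop-in the way
// the sed commands above do: replace an existing "key = ..." line, or append one.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites `key = ...` in place, or appends the line if the key is absent.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}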
	I0802 17:46:02.874460   23378 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:46:02.874528   23378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:46:02.879933   23378 start.go:563] Will wait 60s for crictl version
	I0802 17:46:02.879998   23378 ssh_runner.go:195] Run: which crictl
	I0802 17:46:02.883528   23378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:46:02.923272   23378 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:46:02.923376   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:46:02.949589   23378 ssh_runner.go:195] Run: crio --version
	I0802 17:46:02.979299   23378 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:46:02.980662   23378 out.go:177]   - env NO_PROXY=192.168.39.210
	I0802 17:46:02.981881   23378 out.go:177]   - env NO_PROXY=192.168.39.210,192.168.39.220
	I0802 17:46:02.982980   23378 main.go:141] libmachine: (ha-652395-m03) Calling .GetIP
	I0802 17:46:02.985700   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:02.986094   23378 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:46:02.986121   23378 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:46:02.986355   23378 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:46:02.990125   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
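The bash one-liner above makes the host.minikube.internal mapping idempotent: drop any stale line, then append the current IP. A rough Go equivalent, assuming the same /etc/hosts layout (illustrative only):

// ensure_hosts.go: illustrative sketch only — same idea as the bash one-liner:
// strip any existing "<ip>\thost.minikube.internal" line, then append a fresh one.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping, like `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}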
	I0802 17:46:03.001417   23378 mustload.go:65] Loading cluster: ha-652395
	I0802 17:46:03.001685   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:03.002055   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:03.002102   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:03.017195   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0802 17:46:03.017622   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:03.018112   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:03.018135   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:03.018412   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:03.018590   23378 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:46:03.020165   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:46:03.020466   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:03.020509   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:03.036320   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0802 17:46:03.036679   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:03.037087   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:03.037105   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:03.037410   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:03.037590   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:46:03.037752   23378 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.62
	I0802 17:46:03.037762   23378 certs.go:194] generating shared ca certs ...
	I0802 17:46:03.037775   23378 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.037885   23378 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:46:03.037921   23378 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:46:03.037929   23378 certs.go:256] generating profile certs ...
	I0802 17:46:03.037991   23378 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:46:03.038015   23378 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182
	I0802 17:46:03.038026   23378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.62 192.168.39.254]
	I0802 17:46:03.165060   23378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 ...
	I0802 17:46:03.165090   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182: {Name:mkbcf4904b96ff44c4fb2909d0c0c62a3672ca2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.165254   23378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182 ...
	I0802 17:46:03.165265   23378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182: {Name:mkd9fd8dcc922620ae47f15cba16ed6aa3bd324c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:46:03.165334   23378 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.dbe97182 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:46:03.165480   23378 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.dbe97182 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:46:03.165612   23378 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
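The profile-cert step above issues an apiserver serving certificate whose IP SANs cover the service IP, loopback, every control-plane node and the HA VIP. A hedged Go sketch of that kind of issuance (file names and key handling are assumptions, e.g. an RSA PKCS#1 CA key; this is not minikube's crypto.go):

// issue_apiserver_cert.go: hedged sketch — issue a serving cert with the IP SANs
// listed in the log, signed by an existing CA (ca.crt/ca.key paths assumed).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key")) // assumes RSA PKCS#1 CA key
	if err != nil {
		log.Fatal(err)
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048) // fresh key for the apiserver cert
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: service IP, loopback, node IPs and the HA VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.210"), net.ParseIP("192.168.39.220"),
			net.ParseIP("192.168.39.62"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}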
	I0802 17:46:03.165629   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:46:03.165642   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:46:03.165659   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:46:03.165678   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:46:03.165697   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:46:03.165715   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:46:03.165733   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:46:03.165751   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:46:03.165819   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:46:03.165858   23378 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:46:03.165865   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:46:03.165887   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:46:03.165909   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:46:03.165931   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:46:03.165967   23378 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:46:03.165996   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.166009   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.166021   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.166054   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:46:03.169127   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:03.169589   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:46:03.169623   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:03.169814   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:46:03.170145   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:46:03.170291   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:46:03.170518   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:46:03.251459   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0802 17:46:03.256083   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0802 17:46:03.267440   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0802 17:46:03.271046   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0802 17:46:03.280731   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0802 17:46:03.284525   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0802 17:46:03.293929   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0802 17:46:03.297873   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0802 17:46:03.307359   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0802 17:46:03.313412   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0802 17:46:03.322935   23378 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0802 17:46:03.326564   23378 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0802 17:46:03.335924   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:46:03.360122   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:46:03.384035   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:46:03.405619   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:46:03.428732   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0802 17:46:03.451179   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 17:46:03.472724   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:46:03.495092   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:46:03.519137   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:46:03.542671   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:46:03.564900   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:46:03.586591   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0802 17:46:03.602580   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0802 17:46:03.618040   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0802 17:46:03.633043   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0802 17:46:03.648771   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0802 17:46:03.664016   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0802 17:46:03.679357   23378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0802 17:46:03.695254   23378 ssh_runner.go:195] Run: openssl version
	I0802 17:46:03.700807   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:46:03.710353   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.714382   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.714436   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:46:03.720090   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 17:46:03.729674   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:46:03.739249   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.743193   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.743244   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:46:03.748385   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:46:03.757952   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:46:03.767280   23378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.771207   23378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.771248   23378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:46:03.776380   23378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:46:03.786123   23378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:46:03.789636   23378 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 17:46:03.789693   23378 kubeadm.go:934] updating node {m03 192.168.39.62 8443 v1.30.3 crio true true} ...
	I0802 17:46:03.789784   23378 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:46:03.789808   23378 kube-vip.go:115] generating kube-vip config ...
	I0802 17:46:03.789841   23378 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:46:03.807546   23378 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:46:03.807608   23378 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
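The manifest above is rendered from a template with the VIP address, port and interface substituted in. A simplified, hypothetical sketch of that templating step (trimmed to the fields that vary in this log; the real template in kube-vip.go carries more env vars):

// render_kubevip.go: hypothetical sketch of templating a kube-vip static-pod manifest.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "{{ .EnableLB }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	if err := t.Execute(os.Stdout, map[string]any{
		"VIP": "192.168.39.254", "Port": 8443, "Interface": "eth0", "EnableLB": true,
	}); err != nil {
		panic(err)
	}
}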
	I0802 17:46:03.807703   23378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:46:03.818185   23378 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0802 17:46:03.818229   23378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0802 17:46:03.829488   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0802 17:46:03.829497   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0802 17:46:03.829512   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:46:03.829516   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:46:03.829536   23378 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0802 17:46:03.829571   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0802 17:46:03.829583   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:46:03.829571   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0802 17:46:03.844302   23378 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:46:03.844337   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0802 17:46:03.844357   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0802 17:46:03.844379   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0802 17:46:03.844399   23378 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0802 17:46:03.844408   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0802 17:46:03.865692   23378 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0802 17:46:03.865736   23378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
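Because no cached binaries were found on the node, each component is fetched from dl.k8s.io and checked against its published .sha256 file (the `?checksum=file:` suffix in the log). A hedged Go sketch of that fetch-and-verify step, using the kubelet URL from the log (a real transfer of a ~100 MB binary would stream to disk rather than buffer in memory):

// fetch_and_verify.go: hedged sketch — download a release binary and compare its
// SHA-256 with the published digest file.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256") // the .sha256 file holds the hex digest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubelet checksum OK")
}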
	I0802 17:46:04.683374   23378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0802 17:46:04.692482   23378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0802 17:46:04.708380   23378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:46:04.723672   23378 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0802 17:46:04.738690   23378 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:46:04.742181   23378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 17:46:04.753005   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:04.870718   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:46:04.887521   23378 host.go:66] Checking if "ha-652395" exists ...
	I0802 17:46:04.887970   23378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:46:04.888027   23378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:46:04.903924   23378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0802 17:46:04.904401   23378 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:46:04.904877   23378 main.go:141] libmachine: Using API Version  1
	I0802 17:46:04.904897   23378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:46:04.905212   23378 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:46:04.905395   23378 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:46:04.905538   23378 start.go:317] joinCluster: &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:46:04.905654   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0802 17:46:04.905667   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:46:04.908305   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:04.908844   23378 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:46:04.908871   23378 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:46:04.909014   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:46:04.909313   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:46:04.909513   23378 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:46:04.909674   23378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:46:05.073630   23378 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:46:05.073675   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3vyejt.kbnmanrwnqax2ca9 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m03 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443"
	I0802 17:46:28.601554   23378 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3vyejt.kbnmanrwnqax2ca9 --discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-652395-m03 --control-plane --apiserver-advertise-address=192.168.39.62 --apiserver-bind-port=8443": (23.527855462s)
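The --discovery-token-ca-cert-hash passed to kubeadm join above is a pin of the cluster CA: the SHA-256 of the certificate's DER-encoded SubjectPublicKeyInfo. A small sketch of computing that value from ca.crt (path assumed from the cert copies earlier in the log):

// ca_cert_hash.go: sketch — compute a kubeadm-style CA pin ("sha256:<hex>")
// from a PEM CA certificate by hashing its DER-encoded SubjectPublicKeyInfo.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}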
	I0802 17:46:28.601590   23378 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0802 17:46:29.216420   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-652395-m03 minikube.k8s.io/updated_at=2024_08_02T17_46_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=ha-652395 minikube.k8s.io/primary=false
	I0802 17:46:29.336594   23378 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-652395-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0802 17:46:29.452215   23378 start.go:319] duration metric: took 24.546671487s to joinCluster
	I0802 17:46:29.452292   23378 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 17:46:29.452629   23378 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:46:29.453605   23378 out.go:177] * Verifying Kubernetes components...
	I0802 17:46:29.454946   23378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:46:29.703779   23378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:46:29.760927   23378 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:46:29.761307   23378 kapi.go:59] client config for ha-652395: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0802 17:46:29.761394   23378 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.210:8443
	I0802 17:46:29.761649   23378 node_ready.go:35] waiting up to 6m0s for node "ha-652395-m03" to be "Ready" ...
	I0802 17:46:29.761745   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:29.761755   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:29.761767   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:29.761776   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:29.770376   23378 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0802 17:46:30.261866   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:30.261893   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:30.261904   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:30.261911   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:30.265682   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:30.762410   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:30.762435   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:30.762444   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:30.762451   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:30.765614   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:31.262713   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:31.262736   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:31.262744   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:31.262754   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:31.267327   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:31.761938   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:31.761968   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:31.761982   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:31.761987   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:31.764863   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:31.765340   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:32.262817   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:32.262840   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:32.262851   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:32.262857   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:32.266230   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:32.762068   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:32.762087   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:32.762095   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:32.762098   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:32.765625   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:33.261868   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:33.261888   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:33.261897   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:33.261902   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:33.265036   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:33.762209   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:33.762229   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:33.762236   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:33.762239   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:33.766920   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:33.767820   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:34.262870   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:34.262889   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:34.262897   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:34.262900   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:34.266363   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:34.762163   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:34.762185   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:34.762193   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:34.762197   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:34.765390   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:35.262210   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:35.262233   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:35.262244   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:35.262251   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:35.265436   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:35.761831   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:35.761850   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:35.761859   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:35.761865   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:35.771561   23378 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0802 17:46:35.772661   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:36.261968   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:36.261991   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:36.262002   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:36.262007   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:36.265294   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:36.762238   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:36.762263   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:36.762278   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:36.762284   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:36.765814   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:37.262760   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:37.262786   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:37.262796   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:37.262801   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:37.266752   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:37.762694   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:37.762716   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:37.762726   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:37.762733   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:37.765685   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:38.261881   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:38.261911   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:38.261922   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:38.261927   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:38.266922   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:38.267761   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:38.762572   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:38.762601   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:38.762611   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:38.762616   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:38.765699   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:39.262553   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:39.262576   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:39.262585   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:39.262589   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:39.265635   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:39.762404   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:39.762428   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:39.762439   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:39.762445   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:39.766257   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:40.261822   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:40.261844   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:40.261851   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:40.261856   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:40.265926   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:40.762356   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:40.762374   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:40.762384   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:40.762388   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:40.766293   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:40.766891   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:41.262203   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:41.262226   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:41.262237   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:41.262242   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:41.266069   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:41.761929   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:41.761957   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:41.761968   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:41.761974   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:41.765904   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.261878   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:42.261902   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:42.261910   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:42.261913   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:42.265102   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.762829   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:42.762853   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:42.762865   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:42.762869   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:42.766809   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:42.767367   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:43.262326   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:43.262347   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:43.262355   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:43.262359   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:43.266042   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:43.762046   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:43.762067   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:43.762075   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:43.762079   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:43.765536   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:44.262774   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:44.262798   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:44.262807   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:44.262812   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:44.266011   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:44.761948   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:44.761972   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:44.761983   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:44.761997   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:44.765716   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:45.262435   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:45.262454   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:45.262463   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:45.262466   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:45.265931   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:45.266633   23378 node_ready.go:53] node "ha-652395-m03" has status "Ready":"False"
	I0802 17:46:45.762462   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:45.762479   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:45.762488   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:45.762493   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:45.765789   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:46.262561   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:46.262580   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:46.262588   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:46.262593   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:46.265482   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:46.762168   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:46.762191   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:46.762198   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:46.762203   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:46.765612   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.262815   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:47.262836   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.262843   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.262848   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.265976   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.761953   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:47.761981   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.761995   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.762000   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.765436   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.766105   23378 node_ready.go:49] node "ha-652395-m03" has status "Ready":"True"
	I0802 17:46:47.766127   23378 node_ready.go:38] duration metric: took 18.004460114s for node "ha-652395-m03" to be "Ready" ...
	I0802 17:46:47.766136   23378 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 17:46:47.766214   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:47.766226   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.766235   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.766243   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.774008   23378 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0802 17:46:47.781343   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.781426   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7bnn4
	I0802 17:46:47.781431   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.781439   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.781443   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.785589   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:47.786687   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.786707   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.786717   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.786723   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.789953   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.790718   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.790733   23378 pod_ready.go:81] duration metric: took 9.363791ms for pod "coredns-7db6d8ff4d-7bnn4" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.790742   23378 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.790800   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-gzmsx
	I0802 17:46:47.790811   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.790817   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.790824   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.793539   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.794362   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.794375   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.794382   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.794386   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.796542   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.797035   23378 pod_ready.go:92] pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.797053   23378 pod_ready.go:81] duration metric: took 6.304591ms for pod "coredns-7db6d8ff4d-gzmsx" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.797061   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.797109   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395
	I0802 17:46:47.797117   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.797123   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.797126   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.799477   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.800384   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:47.800398   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.800405   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.800409   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.805504   23378 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0802 17:46:47.806133   23378 pod_ready.go:92] pod "etcd-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.806153   23378 pod_ready.go:81] duration metric: took 9.084753ms for pod "etcd-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.806164   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.806225   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m02
	I0802 17:46:47.806236   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.806246   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.806257   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.809373   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:47.809899   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:47.809913   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.809920   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.809925   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.812199   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:47.812585   23378 pod_ready.go:92] pod "etcd-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:47.812600   23378 pod_ready.go:81] duration metric: took 6.429757ms for pod "etcd-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.812608   23378 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:47.962984   23378 request.go:629] Waited for 150.32177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m03
	I0802 17:46:47.963058   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/etcd-ha-652395-m03
	I0802 17:46:47.963066   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:47.963074   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:47.963079   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:47.966757   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.162525   23378 request.go:629] Waited for 194.948292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:48.162578   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:48.162583   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.162590   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.162594   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.166781   23378 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0802 17:46:48.167780   23378 pod_ready.go:92] pod "etcd-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.167804   23378 pod_ready.go:81] duration metric: took 355.188036ms for pod "etcd-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.167827   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.362846   23378 request.go:629] Waited for 194.928144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:46:48.362907   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395
	I0802 17:46:48.362912   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.362920   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.362927   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.366366   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.562612   23378 request.go:629] Waited for 195.371826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:48.562666   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:48.562671   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.562679   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.562685   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.565549   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:48.566597   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.566617   23378 pod_ready.go:81] duration metric: took 398.78187ms for pod "kube-apiserver-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.566626   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.762745   23378 request.go:629] Waited for 195.99138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:46:48.762810   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m02
	I0802 17:46:48.762817   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.762827   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.762835   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.766560   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.962702   23378 request.go:629] Waited for 195.42677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:48.962762   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:48.962767   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:48.962775   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:48.962779   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:48.966389   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:48.966908   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:48.966926   23378 pod_ready.go:81] duration metric: took 400.293446ms for pod "kube-apiserver-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:48.966935   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.161948   23378 request.go:629] Waited for 194.945915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m03
	I0802 17:46:49.162042   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-652395-m03
	I0802 17:46:49.162052   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.162061   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.162068   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.165467   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.362582   23378 request.go:629] Waited for 196.423946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:49.362663   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:49.362668   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.362676   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.362684   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.366680   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.367279   23378 pod_ready.go:92] pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:49.367302   23378 pod_ready.go:81] duration metric: took 400.357196ms for pod "kube-apiserver-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.367315   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.562217   23378 request.go:629] Waited for 194.831384ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:46:49.562284   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395
	I0802 17:46:49.562289   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.562297   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.562301   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.565924   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:49.762426   23378 request.go:629] Waited for 195.094293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:49.762490   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:49.762495   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.762502   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.762505   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.769266   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:49.769865   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:49.769884   23378 pod_ready.go:81] duration metric: took 402.557554ms for pod "kube-controller-manager-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.769898   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:49.962495   23378 request.go:629] Waited for 192.522293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:46:49.962561   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m02
	I0802 17:46:49.962569   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:49.962579   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:49.962584   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:49.966077   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.162234   23378 request.go:629] Waited for 195.342234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:50.162307   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:50.162314   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.162323   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.162330   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.165518   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.166128   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.166146   23378 pod_ready.go:81] duration metric: took 396.240391ms for pod "kube-controller-manager-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.166159   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.362704   23378 request.go:629] Waited for 196.446774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m03
	I0802 17:46:50.362782   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-652395-m03
	I0802 17:46:50.362791   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.362807   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.362816   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.366509   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.562760   23378 request.go:629] Waited for 195.399695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.562816   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.562821   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.562829   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.562834   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.566397   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.566910   23378 pod_ready.go:92] pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.566932   23378 pod_ready.go:81] duration metric: took 400.763468ms for pod "kube-controller-manager-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.566944   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgghw" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.762495   23378 request.go:629] Waited for 195.482433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgghw
	I0802 17:46:50.762598   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgghw
	I0802 17:46:50.762610   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.762621   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.762630   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.766123   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.962069   23378 request.go:629] Waited for 195.088254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.962144   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:50.962153   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:50.962162   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:50.962170   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:50.965779   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:50.966239   23378 pod_ready.go:92] pod "kube-proxy-fgghw" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:50.966258   23378 pod_ready.go:81] duration metric: took 399.306891ms for pod "kube-proxy-fgghw" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:50.966268   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.162760   23378 request.go:629] Waited for 196.427311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:46:51.162850   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-l7npk
	I0802 17:46:51.162861   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.162873   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.162884   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.166652   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.362632   23378 request.go:629] Waited for 195.360523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:51.362692   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:51.362699   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.362710   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.362716   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.365680   23378 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0802 17:46:51.366225   23378 pod_ready.go:92] pod "kube-proxy-l7npk" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:51.366246   23378 pod_ready.go:81] duration metric: took 399.971201ms for pod "kube-proxy-l7npk" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.366258   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.562327   23378 request.go:629] Waited for 195.965492ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:46:51.562388   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtbb6
	I0802 17:46:51.562394   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.562402   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.562408   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.565803   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.763012   23378 request.go:629] Waited for 196.414319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:51.763086   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:51.763094   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.763124   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.763146   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.766283   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:51.767151   23378 pod_ready.go:92] pod "kube-proxy-rtbb6" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:51.767170   23378 pod_ready.go:81] duration metric: took 400.904121ms for pod "kube-proxy-rtbb6" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.767181   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:51.962170   23378 request.go:629] Waited for 194.91655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:46:51.962246   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395
	I0802 17:46:51.962251   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:51.962260   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:51.962270   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:51.965454   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.162468   23378 request.go:629] Waited for 196.404825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:52.162522   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395
	I0802 17:46:52.162526   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.162533   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.162538   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.165929   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.166675   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.166701   23378 pod_ready.go:81] duration metric: took 399.510556ms for pod "kube-scheduler-ha-652395" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.166715   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.362724   23378 request.go:629] Waited for 195.93744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:46:52.362806   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m02
	I0802 17:46:52.362814   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.362823   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.362831   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.366089   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.561990   23378 request.go:629] Waited for 195.080467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:52.562062   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m02
	I0802 17:46:52.562088   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.562098   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.562106   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.565363   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.565956   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.565974   23378 pod_ready.go:81] duration metric: took 399.25227ms for pod "kube-scheduler-ha-652395-m02" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.565986   23378 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.762409   23378 request.go:629] Waited for 196.357205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m03
	I0802 17:46:52.762492   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-652395-m03
	I0802 17:46:52.762500   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.762510   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.762519   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.766379   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.962239   23378 request.go:629] Waited for 195.337218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:52.962309   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes/ha-652395-m03
	I0802 17:46:52.962314   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.962321   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.962325   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.966257   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:52.967021   23378 pod_ready.go:92] pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace has status "Ready":"True"
	I0802 17:46:52.967048   23378 pod_ready.go:81] duration metric: took 401.05345ms for pod "kube-scheduler-ha-652395-m03" in "kube-system" namespace to be "Ready" ...
	I0802 17:46:52.967062   23378 pod_ready.go:38] duration metric: took 5.200911248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
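The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side token-bucket rate limiter, not from the API server. A hedged sketch of where that knob lives (hypothetical helper and illustrative values, not what minikube configures):

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientWithHigherRateLimit builds a clientset whose client-side token bucket is
// loosened; client-go's defaults are roughly QPS=5 and Burst=10, which is what makes
// a tight polling loop like the one in this log wait ~200ms between requests.
func newClientWithHigherRateLimit(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50 // illustrative values only
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}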
	I0802 17:46:52.967083   23378 api_server.go:52] waiting for apiserver process to appear ...
	I0802 17:46:52.967160   23378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 17:46:52.983092   23378 api_server.go:72] duration metric: took 23.53076578s to wait for apiserver process to appear ...
	I0802 17:46:52.983133   23378 api_server.go:88] waiting for apiserver healthz status ...
	I0802 17:46:52.983158   23378 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0802 17:46:52.988942   23378 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0802 17:46:52.989100   23378 round_trippers.go:463] GET https://192.168.39.210:8443/version
	I0802 17:46:52.989130   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:52.989143   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:52.989150   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:52.990057   23378 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0802 17:46:52.990134   23378 api_server.go:141] control plane version: v1.30.3
	I0802 17:46:52.990165   23378 api_server.go:131] duration metric: took 7.024465ms to wait for apiserver health ...
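The healthz probe and version check above map onto two discovery-client calls. A small sketch, assuming a *kubernetes.Clientset built as in the earlier sketches (hypothetical helper, not minikube's code):

package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer performs the same two probes as the log above: a raw GET of /healthz
// (expecting the literal body "ok") and a query of the /version endpoint.
func checkAPIServer(ctx context.Context, client *kubernetes.Clientset) error {
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.30.3 in this run
	return nil
}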
	I0802 17:46:52.990175   23378 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 17:46:53.162370   23378 request.go:629] Waited for 172.120514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.162440   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.162447   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.162457   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.162470   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.168986   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:53.175225   23378 system_pods.go:59] 24 kube-system pods found
	I0802 17:46:53.175252   23378 system_pods.go:61] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:46:53.175256   23378 system_pods.go:61] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:46:53.175260   23378 system_pods.go:61] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:46:53.175265   23378 system_pods.go:61] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:46:53.175269   23378 system_pods.go:61] "etcd-ha-652395-m03" [55847ea3-fcfb-45c1-84ed-1c59f0103a8e] Running
	I0802 17:46:53.175272   23378 system_pods.go:61] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:46:53.175274   23378 system_pods.go:61] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:46:53.175279   23378 system_pods.go:61] "kindnet-qw2hm" [a2caca18-72b5-4bf1-8e8f-da4f91ff543e] Running
	I0802 17:46:53.175284   23378 system_pods.go:61] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:46:53.175289   23378 system_pods.go:61] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:46:53.175293   23378 system_pods.go:61] "kube-apiserver-ha-652395-m03" [168a8066-6efe-459d-ae4e-7127c490a688] Running
	I0802 17:46:53.175298   23378 system_pods.go:61] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:46:53.175306   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:46:53.175311   23378 system_pods.go:61] "kube-controller-manager-ha-652395-m03" [40ecf9df-0961-4ade-8f00-ba8915370106] Running
	I0802 17:46:53.175319   23378 system_pods.go:61] "kube-proxy-fgghw" [8a72fb78-19f9-499b-943b-fd95b0da2994] Running
	I0802 17:46:53.175324   23378 system_pods.go:61] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:46:53.175331   23378 system_pods.go:61] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:46:53.175336   23378 system_pods.go:61] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:46:53.175342   23378 system_pods.go:61] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:46:53.175345   23378 system_pods.go:61] "kube-scheduler-ha-652395-m03" [bb4d3dc8-ddcc-487a-bc81-4ee5d6c33a54] Running
	I0802 17:46:53.175349   23378 system_pods.go:61] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:46:53.175353   23378 system_pods.go:61] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:46:53.175358   23378 system_pods.go:61] "kube-vip-ha-652395-m03" [b041dfe9-0d53-429d-9b41-4e80d032c691] Running
	I0802 17:46:53.175363   23378 system_pods.go:61] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:46:53.175371   23378 system_pods.go:74] duration metric: took 185.190304ms to wait for pod list to return data ...
	I0802 17:46:53.175379   23378 default_sa.go:34] waiting for default service account to be created ...
	I0802 17:46:53.362278   23378 request.go:629] Waited for 186.808969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:46:53.362334   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/default/serviceaccounts
	I0802 17:46:53.362338   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.362345   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.362350   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.365707   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:53.365846   23378 default_sa.go:45] found service account: "default"
	I0802 17:46:53.365865   23378 default_sa.go:55] duration metric: took 190.475476ms for default service account to be created ...
	I0802 17:46:53.365874   23378 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 17:46:53.562188   23378 request.go:629] Waited for 196.237037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.562289   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/namespaces/kube-system/pods
	I0802 17:46:53.562300   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.562324   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.562336   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.568799   23378 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0802 17:46:53.575257   23378 system_pods.go:86] 24 kube-system pods found
	I0802 17:46:53.575282   23378 system_pods.go:89] "coredns-7db6d8ff4d-7bnn4" [b4eedd91-fcf6-4cef-81b0-d043c38cc00c] Running
	I0802 17:46:53.575288   23378 system_pods.go:89] "coredns-7db6d8ff4d-gzmsx" [f5baa21b-dddf-43b6-a5a2-2b8f8e452a83] Running
	I0802 17:46:53.575293   23378 system_pods.go:89] "etcd-ha-652395" [221bc5ed-c9a4-41ee-8294-965ad8f9165a] Running
	I0802 17:46:53.575297   23378 system_pods.go:89] "etcd-ha-652395-m02" [92e40550-4a35-4769-a0a7-6a6d5c192af8] Running
	I0802 17:46:53.575301   23378 system_pods.go:89] "etcd-ha-652395-m03" [55847ea3-fcfb-45c1-84ed-1c59f0103a8e] Running
	I0802 17:46:53.575305   23378 system_pods.go:89] "kindnet-7n2wh" [33a684f1-19a3-472e-ba29-d1fae4edab93] Running
	I0802 17:46:53.575308   23378 system_pods.go:89] "kindnet-bjrkb" [04d82e24-8aa1-4c71-b904-03b53de10142] Running
	I0802 17:46:53.575312   23378 system_pods.go:89] "kindnet-qw2hm" [a2caca18-72b5-4bf1-8e8f-da4f91ff543e] Running
	I0802 17:46:53.575320   23378 system_pods.go:89] "kube-apiserver-ha-652395" [d004ddbd-7ea1-4702-ac84-3681621c7a13] Running
	I0802 17:46:53.575325   23378 system_pods.go:89] "kube-apiserver-ha-652395-m02" [a1dc5d2f-2a1c-4853-a83e-05f665ee4f00] Running
	I0802 17:46:53.575331   23378 system_pods.go:89] "kube-apiserver-ha-652395-m03" [168a8066-6efe-459d-ae4e-7127c490a688] Running
	I0802 17:46:53.575336   23378 system_pods.go:89] "kube-controller-manager-ha-652395" [e2ecf3df-c8af-4407-84a4-bfd052a3f5aa] Running
	I0802 17:46:53.575343   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m02" [f2761a4e-d3dd-434f-b717-094d0b53d1cb] Running
	I0802 17:46:53.575347   23378 system_pods.go:89] "kube-controller-manager-ha-652395-m03" [40ecf9df-0961-4ade-8f00-ba8915370106] Running
	I0802 17:46:53.575354   23378 system_pods.go:89] "kube-proxy-fgghw" [8a72fb78-19f9-499b-943b-fd95b0da2994] Running
	I0802 17:46:53.575358   23378 system_pods.go:89] "kube-proxy-l7npk" [8db2cf39-da2a-42f7-8f34-6cd8f61d0b08] Running
	I0802 17:46:53.575364   23378 system_pods.go:89] "kube-proxy-rtbb6" [4e5ce587-0e3a-4cae-9358-66ceaaf05f58] Running
	I0802 17:46:53.575368   23378 system_pods.go:89] "kube-scheduler-ha-652395" [6dec3f93-8fa3-4045-8e81-deec2cc26ae6] Running
	I0802 17:46:53.575375   23378 system_pods.go:89] "kube-scheduler-ha-652395-m02" [dd4ed827-ccf7-4f23-8a1d-0823cde7e577] Running
	I0802 17:46:53.575379   23378 system_pods.go:89] "kube-scheduler-ha-652395-m03" [bb4d3dc8-ddcc-487a-bc81-4ee5d6c33a54] Running
	I0802 17:46:53.575385   23378 system_pods.go:89] "kube-vip-ha-652395" [1ee810a9-9d93-4cff-a5bb-60bab005eb5c] Running
	I0802 17:46:53.575389   23378 system_pods.go:89] "kube-vip-ha-652395-m02" [e16bf714-b09a-490d-80ad-73f7a4b71c27] Running
	I0802 17:46:53.575394   23378 system_pods.go:89] "kube-vip-ha-652395-m03" [b041dfe9-0d53-429d-9b41-4e80d032c691] Running
	I0802 17:46:53.575398   23378 system_pods.go:89] "storage-provisioner" [149760da-f585-48bf-9cc8-63ff848cf3c8] Running
	I0802 17:46:53.575405   23378 system_pods.go:126] duration metric: took 209.523014ms to wait for k8s-apps to be running ...
	I0802 17:46:53.575412   23378 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 17:46:53.575457   23378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 17:46:53.590689   23378 system_svc.go:56] duration metric: took 15.269351ms WaitForService to wait for kubelet
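The kubelet check above runs systemctl over SSH inside the VM and only looks at the exit code. A simplified, local-only equivalent (hypothetical sketch; minikube issues the command through its SSH runner on the node):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}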
	I0802 17:46:53.590714   23378 kubeadm.go:582] duration metric: took 24.138389815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:46:53.590734   23378 node_conditions.go:102] verifying NodePressure condition ...
	I0802 17:46:53.762036   23378 request.go:629] Waited for 171.237519ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.210:8443/api/v1/nodes
	I0802 17:46:53.762120   23378 round_trippers.go:463] GET https://192.168.39.210:8443/api/v1/nodes
	I0802 17:46:53.762127   23378 round_trippers.go:469] Request Headers:
	I0802 17:46:53.762137   23378 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0802 17:46:53.762146   23378 round_trippers.go:473]     Accept: application/json, */*
	I0802 17:46:53.765799   23378 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0802 17:46:53.766763   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766783   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766794   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766797   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766801   23378 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 17:46:53.766804   23378 node_conditions.go:123] node cpu capacity is 2
	I0802 17:46:53.766808   23378 node_conditions.go:105] duration metric: took 176.069555ms to run NodePressure ...
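The NodePressure pass above lists the nodes and reads each node's capacity, which is where the "storage ephemeral capacity" and "cpu capacity" figures come from. A sketch of that read, assuming the same clientset as in the earlier sketches (hypothetical helper):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two capacity fields the log reports
// (ephemeral storage and CPU count).
func printNodeCapacity(ctx context.Context, client *kubernetes.Clientset) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}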
	I0802 17:46:53.766819   23378 start.go:241] waiting for startup goroutines ...
	I0802 17:46:53.766843   23378 start.go:255] writing updated cluster config ...
	I0802 17:46:53.767126   23378 ssh_runner.go:195] Run: rm -f paused
	I0802 17:46:53.817853   23378 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 17:46:53.819753   23378 out.go:177] * Done! kubectl is now configured to use "ha-652395" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.287734171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621084287703991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a81f9db4-355f-41b0-b9fc-b42c6a2e2f04 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.288397604Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05fa3f4a-1063-438e-ba00-78697e9e2302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.288496685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05fa3f4a-1063-438e-ba00-78697e9e2302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.288738060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05fa3f4a-1063-438e-ba00-78697e9e2302 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.325958808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1575e90d-d554-418f-9148-bc954b697be8 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.326041063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1575e90d-d554-418f-9148-bc954b697be8 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.327796157Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2739c9f-b7f5-4c98-b720-227b1e6d5398 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.328270682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621084328248557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2739c9f-b7f5-4c98-b720-227b1e6d5398 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.329009179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b75f0e2-4f84-42ae-a649-e82aecce77c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.329063901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b75f0e2-4f84-42ae-a649-e82aecce77c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.329295840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b75f0e2-4f84-42ae-a649-e82aecce77c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.369120622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61163cb0-8b30-4d2f-928d-cdd2129fa8bb name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.369216830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61163cb0-8b30-4d2f-928d-cdd2129fa8bb name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.370135907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46b4b9ca-421c-42bb-969e-73bb6f1629f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.371095641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621084371061106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46b4b9ca-421c-42bb-969e-73bb6f1629f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.371861886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebdfeead-f4a3-4675-9ed1-94251e5d1f78 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.371936599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebdfeead-f4a3-4675-9ed1-94251e5d1f78 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.372212981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebdfeead-f4a3-4675-9ed1-94251e5d1f78 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.412548218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ffcf187-5d81-455c-bbee-fc31cfdc2320 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.412634264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ffcf187-5d81-455c-bbee-fc31cfdc2320 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.413670606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12b414b0-f247-49a2-8bea-29b8b2fe9344 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.414209223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621084414179161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12b414b0-f247-49a2-8bea-29b8b2fe9344 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.414873849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e196fe0b-9cd2-4a6d-8da1-c027e16541e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.414985531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e196fe0b-9cd2-4a6d-8da1-c027e16541e2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:51:24 ha-652395 crio[673]: time="2024-08-02 17:51:24.415345594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722620817831244072,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b353e683c45c418ba90bd8365315f70f4345b261ea75807fb0e25ace0ada37a,PodSandboxId:3c8b3d0b4534ff372a72475d9ae352350cc62b5ed3d449782921ad0e6924d428,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722620673221822366,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673204543175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722620673177108050,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-ddd
f-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CON
TAINER_RUNNING,CreatedAt:1722620661418638012,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722620657
834179826,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6144aba25daef80ccf20ca69cdc8dd550073e91644ac9e89eb7319a4d55e2a90,PodSandboxId:f70dac73be7d9e0915854ddb5ed3d965ff13dca4abf762f0e090bc26f2546200,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172262064174
9083607,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90fe20ba3a1314e53eb4a1b834adcbbf,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439,PodSandboxId:b08c45a675b532dd7c8302a227735b183109fbee139b54920b94fbdf65735968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722620638737786218,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6,PodSandboxId:2f03523628a5ef263342e0ea8a644190931032104a376e1905ddccec32e34d31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722620638687123093,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722620638641647480,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722620638599194093,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernet
es.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e196fe0b-9cd2-4a6d-8da1-c027e16541e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8fd869ff4b02d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   e8db151d94a97       busybox-fc5497c4f-wwdvm
	0b353e683c45c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   3c8b3d0b4534f       storage-provisioner
	c360a48ed21dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   236df4e4d374d       coredns-7db6d8ff4d-7bnn4
	122af758e0175       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   7a85af5981798       coredns-7db6d8ff4d-gzmsx
	e5737b2ef0345       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   93bf8df122de4       kindnet-bjrkb
	dbaf687f1fee9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   aa85cd011b109       kube-proxy-l7npk
	6144aba25daef       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   f70dac73be7d9       kube-vip-ha-652395
	158d622aed9a7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   b08c45a675b53       kube-apiserver-ha-652395
	a3c95a2e3488e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   2f03523628a5e       kube-controller-manager-ha-652395
	c587c6ce09941       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   d14257a1927ee       kube-scheduler-ha-652395
	fae5bea03ccdc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   540d9595b8d86       etcd-ha-652395
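	
	The table above is the same data that crio returns to the test harness in the debug log (the RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers responses), and it can be queried by hand over the CRI socket. A minimal sketch, assuming the ha-652395 profile is still running and using the crictl binary inside the minikube guest:
	
	  minikube -p ha-652395 ssh -- sudo crictl version
	  minikube -p ha-652395 ssh -- sudo crictl imagefsinfo
	  minikube -p ha-652395 ssh -- sudo crictl ps -a
	
	If crictl in the guest is not already pointed at CRI-O, pass --runtime-endpoint unix:///var/run/crio/crio.sock (the socket path reported in the node annotations below); the last command should reproduce the container listing shown here.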
	
	
	==> coredns [122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1] <==
	[INFO] 10.244.2.2:60449 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154708s
	[INFO] 10.244.2.2:59061 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000076226s
	[INFO] 10.244.2.2:55056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153757s
	[INFO] 10.244.2.2:54378 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059203s
	[INFO] 10.244.0.4:54290 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133679s
	[INFO] 10.244.0.4:45555 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706989s
	[INFO] 10.244.0.4:53404 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000111941s
	[INFO] 10.244.0.4:37483 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045512s
	[INFO] 10.244.1.2:49967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130599s
	[INFO] 10.244.1.2:57007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090105s
	[INFO] 10.244.1.2:43820 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110127s
	[INFO] 10.244.2.2:36224 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000096715s
	[INFO] 10.244.2.2:60973 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081675s
	[INFO] 10.244.0.4:40476 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189503s
	[INFO] 10.244.0.4:56165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046005s
	[INFO] 10.244.0.4:44437 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034828s
	[INFO] 10.244.0.4:35238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032154s
	[INFO] 10.244.1.2:56315 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166841s
	[INFO] 10.244.1.2:47239 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198329s
	[INFO] 10.244.1.2:57096 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123709s
	[INFO] 10.244.2.2:46134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000490913s
	[INFO] 10.244.2.2:53250 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148459s
	[INFO] 10.244.0.4:56093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118253s
	[INFO] 10.244.0.4:34180 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008075s
	[INFO] 10.244.0.4:45410 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005242s
	
	
	==> coredns [c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e] <==
	[INFO] 10.244.1.2:54559 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.003157766s
	[INFO] 10.244.1.2:59747 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.001806815s
	[INFO] 10.244.2.2:50295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166931s
	[INFO] 10.244.2.2:41315 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000159243s
	[INFO] 10.244.2.2:36008 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.00010659s
	[INFO] 10.244.2.2:60572 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001796945s
	[INFO] 10.244.0.4:60264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128512s
	[INFO] 10.244.0.4:53377 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000106189s
	[INFO] 10.244.0.4:40974 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000119601s
	[INFO] 10.244.1.2:34952 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129505s
	[INFO] 10.244.1.2:58425 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00370685s
	[INFO] 10.244.1.2:57393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172166s
	[INFO] 10.244.2.2:37875 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001360319s
	[INFO] 10.244.2.2:40319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119258s
	[INFO] 10.244.0.4:41301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000086881s
	[INFO] 10.244.0.4:48861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00176135s
	[INFO] 10.244.0.4:55078 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000129582s
	[INFO] 10.244.0.4:37426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138717s
	[INFO] 10.244.1.2:36979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118362s
	[INFO] 10.244.2.2:57363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012985s
	[INFO] 10.244.2.2:39508 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130428s
	[INFO] 10.244.1.2:35447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118938s
	[INFO] 10.244.2.2:32993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168916s
	[INFO] 10.244.2.2:41103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214849s
	[INFO] 10.244.0.4:36090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133411s
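	
	The coredns queries above are ordinary in-cluster lookups against the cluster DNS service (10.96.0.10, the address behind the reversed 10.0.96.10.in-addr.arpa PTR entries). A quick, hedged way to reproduce one from the busybox test pod seen earlier, assuming the cluster from this run is still up and the kubeconfig context carries the profile name as minikube sets it by default:
	
	  kubectl --context ha-652395 exec busybox-fc5497c4f-wwdvm -- nslookup kubernetes.default.svc.cluster.local
	  kubectl --context ha-652395 exec busybox-fc5497c4f-wwdvm -- nslookup host.minikube.internal
	
	Both names resolve through the coredns pods whose logs are shown here, and since query logging is evidently enabled in this cluster's Corefile, the lookups should appear as new [INFO] lines in one of the two logs.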
	
	
	==> describe nodes <==
	Name:               ha-652395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:47:09 +0000   Fri, 02 Aug 2024 17:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-652395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ba599bf07ef4e41ba86086b6ac2ff1a
	  System UUID:                5ba599bf-07ef-4e41-ba86-086b6ac2ff1a
	  Boot ID:                    ed33b037-d8f7-4cbf-a057-27f14a3cc7dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwdvm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-7bnn4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 coredns-7db6d8ff4d-gzmsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m7s
	  kube-system                 etcd-ha-652395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m20s
	  kube-system                 kindnet-bjrkb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m7s
	  kube-system                 kube-apiserver-ha-652395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-652395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-proxy-l7npk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 kube-scheduler-ha-652395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-vip-ha-652395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  Starting                 7m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m26s (x7 over 7m26s)  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m26s (x8 over 7m26s)  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m26s (x8 over 7m26s)  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m20s                  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m20s                  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m20s                  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m8s                   node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal  NodeReady                6m52s                  kubelet          Node ha-652395 status is now: NodeReady
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
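	
	If the profile is still available, the node view summarized here can be refreshed without re-running the test; a couple of read-only commands, with the context name assumed to match the profile as above:
	
	  kubectl --context ha-652395 get nodes -o wide
	  kubectl --context ha-652395 describe node ha-652395-m02
	
	The second command is the quicker way to inspect the node.kubernetes.io/unreachable taints and Unknown conditions that ha-652395-m02 reports below after its kubelet stopped posting status.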
	
	
	Name:               ha-652395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:45:10 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:48:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 02 Aug 2024 17:47:13 +0000   Fri, 02 Aug 2024 17:48:44 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-652395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4562f021ca54cf29302ae6053b176ca
	  System UUID:                b4562f02-1ca5-4cf2-9302-ae6053b176ca
	  Boot ID:                    e7c511aa-dc1e-4298-ac46-9d614ab780c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4gkm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-652395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m12s
	  kube-system                 kindnet-7n2wh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m14s
	  kube-system                 kube-apiserver-ha-652395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-652395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-rtbb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-ha-652395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-652395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-652395-m02 status is now: NodeNotReady
	
	
	Name:               ha-652395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_46_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:46:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:51:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:47:27 +0000   Fri, 02 Aug 2024 17:46:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-652395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98b40f3acdab4627b19b6017ea4f9a53
	  System UUID:                98b40f3a-cdab-4627-b19b-6017ea4f9a53
	  Boot ID:                    5e9d8bb1-9650-48d7-bddb-5da6b47ffd9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lwm5m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-652395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m56s
	  kube-system                 kindnet-qw2hm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m58s
	  kube-system                 kube-apiserver-ha-652395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-ha-652395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-fgghw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-652395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-vip-ha-652395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-652395-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	
	
	Name:               ha-652395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_47_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:51:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:48:00 +0000   Fri, 02 Aug 2024 17:47:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-652395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 998c02abf56b4784b82e5c48780cf7d3
	  System UUID:                998c02ab-f56b-4784-b82e-5c48780cf7d3
	  Boot ID:                    775309da-8648-4a2a-9433-2f07263d9659
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nksdg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-d44zn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m55s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m55s)  kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m55s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-652395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 2 17:43] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051087] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037656] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.691479] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.739202] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.520223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.851587] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054661] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055410] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.166920] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.132294] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.235363] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.898825] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.781164] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 2 17:44] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.095134] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.851149] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.579996] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 2 17:45] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1] <==
	{"level":"warn","ts":"2024-08-02T17:51:24.670467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.67902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.683573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.700739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.708012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.714289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.718575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.72134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.72957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.738294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.744252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.747594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.750693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.756983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.762964Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.768371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.77033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.773933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.775088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.778076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.782749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.788291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.794289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.82212Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:51:24.841589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:51:24 up 7 min,  0 users,  load average: 0.08, 0.20, 0.13
	Linux ha-652395 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1] <==
	I0802 17:50:52.527915       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:51:02.528539       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:51:02.528567       1 main.go:299] handling current node
	I0802 17:51:02.528592       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:51:02.528596       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:51:02.528731       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:51:02.528736       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:51:02.528807       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:51:02.528836       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:51:12.522951       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:51:12.523408       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:51:12.523617       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:51:12.523649       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:51:12.523761       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:51:12.523804       1 main.go:299] handling current node
	I0802 17:51:12.523834       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:51:12.523851       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:51:22.520673       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:51:22.520777       1 main.go:299] handling current node
	I0802 17:51:22.520819       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:51:22.520842       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:51:22.520990       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:51:22.521011       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:51:22.521088       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:51:22.521106       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [158d622aed9a79cabdd29acb1449354000a5500e94b4ce4bb805d4b919f49439] <==
	I0802 17:44:04.923350       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0802 17:44:04.938631       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 17:44:17.141121       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0802 17:44:17.219269       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0802 17:46:27.269704       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0802 17:46:27.270497       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0802 17:46:27.270506       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 218.432µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0802 17:46:27.272378       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0802 17:46:27.272560       1 timeout.go:142] post-timeout activity - time-elapsed: 3.01038ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0802 17:46:59.020907       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35408: use of closed network connection
	E0802 17:46:59.231600       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35434: use of closed network connection
	E0802 17:46:59.424159       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35456: use of closed network connection
	E0802 17:46:59.607761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35472: use of closed network connection
	E0802 17:46:59.786874       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35486: use of closed network connection
	E0802 17:46:59.972651       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35508: use of closed network connection
	E0802 17:47:00.154198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35528: use of closed network connection
	E0802 17:47:00.320229       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35544: use of closed network connection
	E0802 17:47:00.496603       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35558: use of closed network connection
	E0802 17:47:00.784699       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35582: use of closed network connection
	E0802 17:47:00.956287       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35598: use of closed network connection
	E0802 17:47:01.157122       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35612: use of closed network connection
	E0802 17:47:01.325967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35626: use of closed network connection
	E0802 17:47:01.498415       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35656: use of closed network connection
	E0802 17:47:01.686970       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35662: use of closed network connection
	W0802 17:48:22.810965       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.210 192.168.39.62]
	
	
	==> kube-controller-manager [a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6] <==
	I0802 17:46:26.434062       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-652395-m03" podCIDRs=["10.244.2.0/24"]
	I0802 17:46:31.448149       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m03"
	I0802 17:46:54.718102       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.249834ms"
	I0802 17:46:54.746796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.546663ms"
	I0802 17:46:54.977547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.684094ms"
	I0802 17:46:55.051103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.251999ms"
	I0802 17:46:55.083704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.495025ms"
	I0802 17:46:55.083824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.339µs"
	I0802 17:46:55.213621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.597634ms"
	I0802 17:46:55.213905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.702µs"
	I0802 17:46:58.071723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.562837ms"
	I0802 17:46:58.071875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.601µs"
	I0802 17:46:58.583792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.502132ms"
	E0802 17:46:58.583906       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0802 17:46:58.584055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.111µs"
	I0802 17:46:58.589593       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.816µs"
	E0802 17:47:29.961797       1 certificate_controller.go:146] Sync csr-lzxzq failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lzxzq": the object has been modified; please apply your changes to the latest version and try again
	E0802 17:47:29.978174       1 certificate_controller.go:146] Sync csr-lzxzq failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lzxzq": the object has been modified; please apply your changes to the latest version and try again
	I0802 17:47:30.240126       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-652395-m04\" does not exist"
	I0802 17:47:30.269420       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-652395-m04" podCIDRs=["10.244.3.0/24"]
	I0802 17:47:31.461708       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m04"
	I0802 17:47:50.185291       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	I0802 17:48:44.488719       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	I0802 17:48:44.647582       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.085066ms"
	I0802 17:48:44.647843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.457µs"
	
	
	==> kube-proxy [dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d] <==
	I0802 17:44:18.175971       1 server_linux.go:69] "Using iptables proxy"
	I0802 17:44:18.192513       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.210"]
	I0802 17:44:18.232306       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:44:18.232344       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:44:18.232359       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:44:18.235019       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:44:18.235619       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:44:18.235694       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:44:18.237419       1 config.go:192] "Starting service config controller"
	I0802 17:44:18.237875       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:44:18.237930       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:44:18.237978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:44:18.239116       1 config.go:319] "Starting node config controller"
	I0802 17:44:18.239152       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 17:44:18.338607       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0802 17:44:18.338702       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:44:18.339243       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca] <==
	E0802 17:46:26.483599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fgghw\": pod kube-proxy-fgghw is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fgghw" node="ha-652395-m03"
	E0802 17:46:26.484154       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8a72fb78-19f9-499b-943b-fd95b0da2994(kube-system/kube-proxy-fgghw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-fgghw"
	E0802 17:46:26.484295       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fgghw\": pod kube-proxy-fgghw is already assigned to node \"ha-652395-m03\"" pod="kube-system/kube-proxy-fgghw"
	I0802 17:46:26.484397       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-fgghw" node="ha-652395-m03"
	E0802 17:46:26.488889       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qw2hm\": pod kindnet-qw2hm is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-qw2hm" node="ha-652395-m03"
	E0802 17:46:26.488934       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a2caca18-72b5-4bf1-8e8f-da4f91ff543e(kube-system/kindnet-qw2hm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qw2hm"
	E0802 17:46:26.488952       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qw2hm\": pod kindnet-qw2hm is already assigned to node \"ha-652395-m03\"" pod="kube-system/kindnet-qw2hm"
	I0802 17:46:26.488968       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qw2hm" node="ha-652395-m03"
	I0802 17:46:54.677737       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="dfefc05b-4ed9-4de9-b511-848735a02832" pod="default/busybox-fc5497c4f-4gkm6" assumedNode="ha-652395-m02" currentNode="ha-652395-m03"
	E0802 17:46:54.681524       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4gkm6\": pod busybox-fc5497c4f-4gkm6 is already assigned to node \"ha-652395-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-4gkm6" node="ha-652395-m03"
	E0802 17:46:54.681606       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod dfefc05b-4ed9-4de9-b511-848735a02832(default/busybox-fc5497c4f-4gkm6) was assumed on ha-652395-m03 but assigned to ha-652395-m02" pod="default/busybox-fc5497c4f-4gkm6"
	E0802 17:46:54.681629       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-4gkm6\": pod busybox-fc5497c4f-4gkm6 is already assigned to node \"ha-652395-m02\"" pod="default/busybox-fc5497c4f-4gkm6"
	I0802 17:46:54.681665       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-4gkm6" node="ha-652395-m02"
	E0802 17:46:54.718965       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwdvm\": pod busybox-fc5497c4f-wwdvm is already assigned to node \"ha-652395\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wwdvm" node="ha-652395"
	E0802 17:46:54.719105       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8d2d25e8-37d0-45c4-9b5a-9722d329d86f(default/busybox-fc5497c4f-wwdvm) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wwdvm"
	E0802 17:46:54.719159       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wwdvm\": pod busybox-fc5497c4f-wwdvm is already assigned to node \"ha-652395\"" pod="default/busybox-fc5497c4f-wwdvm"
	I0802 17:46:54.719234       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wwdvm" node="ha-652395"
	E0802 17:46:54.719601       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lwm5m\": pod busybox-fc5497c4f-lwm5m is already assigned to node \"ha-652395-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lwm5m" node="ha-652395-m03"
	E0802 17:46:54.719665       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6389e9d8-4530-492e-8bc6-7bc9a6516f41(default/busybox-fc5497c4f-lwm5m) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lwm5m"
	E0802 17:46:54.719697       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lwm5m\": pod busybox-fc5497c4f-lwm5m is already assigned to node \"ha-652395-m03\"" pod="default/busybox-fc5497c4f-lwm5m"
	I0802 17:46:54.719759       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lwm5m" node="ha-652395-m03"
	E0802 17:47:30.336011       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d44zn\": pod kube-proxy-d44zn is already assigned to node \"ha-652395-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d44zn" node="ha-652395-m04"
	E0802 17:47:30.336363       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod d24eb3a9-0a5f-4f16-92f9-51cb43af681a(kube-system/kube-proxy-d44zn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d44zn"
	E0802 17:47:30.336595       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d44zn\": pod kube-proxy-d44zn is already assigned to node \"ha-652395-m04\"" pod="kube-system/kube-proxy-d44zn"
	I0802 17:47:30.336681       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d44zn" node="ha-652395-m04"
	
	
	==> kubelet <==
	Aug 02 17:47:04 ha-652395 kubelet[1358]: E0802 17:47:04.856413    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:47:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:47:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:48:04 ha-652395 kubelet[1358]: E0802 17:48:04.857008    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:48:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:48:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:49:04 ha-652395 kubelet[1358]: E0802 17:49:04.856898    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:49:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:49:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:50:04 ha-652395 kubelet[1358]: E0802 17:50:04.862833    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:50:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:50:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:51:04 ha-652395 kubelet[1358]: E0802 17:51:04.857228    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:51:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:51:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:51:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:51:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-652395 -n ha-652395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-652395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-652395 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-652395 -v=7 --alsologtostderr
E0802 17:52:43.927452   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:53:11.611321   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-652395 -v=7 --alsologtostderr: exit status 82 (2m1.767062233s)

                                                
                                                
-- stdout --
	* Stopping node "ha-652395-m04"  ...
	* Stopping node "ha-652395-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:51:26.214985   29122 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:51:26.215129   29122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:26.215141   29122 out.go:304] Setting ErrFile to fd 2...
	I0802 17:51:26.215148   29122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:51:26.215374   29122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:51:26.215633   29122 out.go:298] Setting JSON to false
	I0802 17:51:26.215730   29122 mustload.go:65] Loading cluster: ha-652395
	I0802 17:51:26.216078   29122 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:51:26.216172   29122 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:51:26.216381   29122 mustload.go:65] Loading cluster: ha-652395
	I0802 17:51:26.216539   29122 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:51:26.216586   29122 stop.go:39] StopHost: ha-652395-m04
	I0802 17:51:26.216980   29122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:26.217023   29122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:26.232312   29122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0802 17:51:26.232740   29122 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:26.233300   29122 main.go:141] libmachine: Using API Version  1
	I0802 17:51:26.233327   29122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:26.233664   29122 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:26.236259   29122 out.go:177] * Stopping node "ha-652395-m04"  ...
	I0802 17:51:26.237438   29122 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 17:51:26.237460   29122 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:51:26.237676   29122 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 17:51:26.237706   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:51:26.240579   29122 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:26.241046   29122 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:47:16 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:51:26.241078   29122 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:51:26.241242   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:51:26.241417   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:51:26.241593   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:51:26.241739   29122 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:51:26.321670   29122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 17:51:26.374600   29122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 17:51:26.427342   29122 main.go:141] libmachine: Stopping "ha-652395-m04"...
	I0802 17:51:26.427381   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:51:26.429011   29122 main.go:141] libmachine: (ha-652395-m04) Calling .Stop
	I0802 17:51:26.433063   29122 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 0/120
	I0802 17:51:27.523557   29122 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:51:27.524919   29122 main.go:141] libmachine: Machine "ha-652395-m04" was stopped.
	I0802 17:51:27.524936   29122 stop.go:75] duration metric: took 1.287506065s to stop
	I0802 17:51:27.524956   29122 stop.go:39] StopHost: ha-652395-m03
	I0802 17:51:27.525359   29122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:51:27.525403   29122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:51:27.541129   29122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44337
	I0802 17:51:27.541619   29122 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:51:27.542284   29122 main.go:141] libmachine: Using API Version  1
	I0802 17:51:27.542304   29122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:51:27.542653   29122 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:51:27.544439   29122 out.go:177] * Stopping node "ha-652395-m03"  ...
	I0802 17:51:27.545563   29122 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 17:51:27.545584   29122 main.go:141] libmachine: (ha-652395-m03) Calling .DriverName
	I0802 17:51:27.545818   29122 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 17:51:27.545840   29122 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHHostname
	I0802 17:51:27.549007   29122 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:27.549479   29122 main.go:141] libmachine: (ha-652395-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:60:5b", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:45:50 +0000 UTC Type:0 Mac:52:54:00:23:60:5b Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:ha-652395-m03 Clientid:01:52:54:00:23:60:5b}
	I0802 17:51:27.549499   29122 main.go:141] libmachine: (ha-652395-m03) DBG | domain ha-652395-m03 has defined IP address 192.168.39.62 and MAC address 52:54:00:23:60:5b in network mk-ha-652395
	I0802 17:51:27.549635   29122 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHPort
	I0802 17:51:27.549827   29122 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHKeyPath
	I0802 17:51:27.549953   29122 main.go:141] libmachine: (ha-652395-m03) Calling .GetSSHUsername
	I0802 17:51:27.550098   29122 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m03/id_rsa Username:docker}
	I0802 17:51:27.634124   29122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 17:51:27.687283   29122 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 17:51:27.741918   29122 main.go:141] libmachine: Stopping "ha-652395-m03"...
	I0802 17:51:27.741948   29122 main.go:141] libmachine: (ha-652395-m03) Calling .GetState
	I0802 17:51:27.743633   29122 main.go:141] libmachine: (ha-652395-m03) Calling .Stop
	I0802 17:51:27.746857   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 0/120
	I0802 17:51:28.748181   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 1/120
	I0802 17:51:29.749624   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 2/120
	I0802 17:51:30.751185   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 3/120
	I0802 17:51:31.752603   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 4/120
	I0802 17:51:32.754509   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 5/120
	I0802 17:51:33.756068   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 6/120
	I0802 17:51:34.757411   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 7/120
	I0802 17:51:35.758967   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 8/120
	I0802 17:51:36.760409   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 9/120
	I0802 17:51:37.762508   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 10/120
	I0802 17:51:38.763980   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 11/120
	I0802 17:51:39.765654   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 12/120
	I0802 17:51:40.767262   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 13/120
	I0802 17:51:41.768916   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 14/120
	I0802 17:51:42.770945   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 15/120
	I0802 17:51:43.772581   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 16/120
	I0802 17:51:44.774155   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 17/120
	I0802 17:51:45.775753   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 18/120
	I0802 17:51:46.777577   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 19/120
	I0802 17:51:47.779714   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 20/120
	I0802 17:51:48.781035   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 21/120
	I0802 17:51:49.782448   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 22/120
	I0802 17:51:50.783899   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 23/120
	I0802 17:51:51.785725   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 24/120
	I0802 17:51:52.788299   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 25/120
	I0802 17:51:53.789896   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 26/120
	I0802 17:51:54.791363   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 27/120
	I0802 17:51:55.793216   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 28/120
	I0802 17:51:56.794742   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 29/120
	I0802 17:51:57.796732   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 30/120
	I0802 17:51:58.798156   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 31/120
	I0802 17:51:59.799739   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 32/120
	I0802 17:52:00.801335   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 33/120
	I0802 17:52:01.802759   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 34/120
	I0802 17:52:02.804624   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 35/120
	I0802 17:52:03.806113   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 36/120
	I0802 17:52:04.807496   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 37/120
	I0802 17:52:05.808804   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 38/120
	I0802 17:52:06.810239   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 39/120
	I0802 17:52:07.812312   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 40/120
	I0802 17:52:08.813614   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 41/120
	I0802 17:52:09.814923   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 42/120
	I0802 17:52:10.816385   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 43/120
	I0802 17:52:11.817886   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 44/120
	I0802 17:52:12.820386   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 45/120
	I0802 17:52:13.821809   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 46/120
	I0802 17:52:14.823356   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 47/120
	I0802 17:52:15.824736   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 48/120
	I0802 17:52:16.825919   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 49/120
	I0802 17:52:17.827679   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 50/120
	I0802 17:52:18.829072   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 51/120
	I0802 17:52:19.830826   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 52/120
	I0802 17:52:20.832260   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 53/120
	I0802 17:52:21.833754   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 54/120
	I0802 17:52:22.835588   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 55/120
	I0802 17:52:23.837848   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 56/120
	I0802 17:52:24.839384   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 57/120
	I0802 17:52:25.840669   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 58/120
	I0802 17:52:26.842184   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 59/120
	I0802 17:52:27.844184   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 60/120
	I0802 17:52:28.845697   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 61/120
	I0802 17:52:29.847527   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 62/120
	I0802 17:52:30.848938   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 63/120
	I0802 17:52:31.850210   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 64/120
	I0802 17:52:32.852249   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 65/120
	I0802 17:52:33.853765   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 66/120
	I0802 17:52:34.855180   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 67/120
	I0802 17:52:35.856375   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 68/120
	I0802 17:52:36.857774   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 69/120
	I0802 17:52:37.859423   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 70/120
	I0802 17:52:38.860863   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 71/120
	I0802 17:52:39.862203   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 72/120
	I0802 17:52:40.863596   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 73/120
	I0802 17:52:41.864951   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 74/120
	I0802 17:52:42.866666   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 75/120
	I0802 17:52:43.868101   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 76/120
	I0802 17:52:44.869510   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 77/120
	I0802 17:52:45.870825   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 78/120
	I0802 17:52:46.872089   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 79/120
	I0802 17:52:47.873691   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 80/120
	I0802 17:52:48.875271   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 81/120
	I0802 17:52:49.876750   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 82/120
	I0802 17:52:50.878299   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 83/120
	I0802 17:52:51.880020   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 84/120
	I0802 17:52:52.881798   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 85/120
	I0802 17:52:53.883412   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 86/120
	I0802 17:52:54.884786   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 87/120
	I0802 17:52:55.886284   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 88/120
	I0802 17:52:56.887851   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 89/120
	I0802 17:52:57.890237   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 90/120
	I0802 17:52:58.891601   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 91/120
	I0802 17:52:59.893256   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 92/120
	I0802 17:53:00.894534   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 93/120
	I0802 17:53:01.896145   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 94/120
	I0802 17:53:02.897825   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 95/120
	I0802 17:53:03.899247   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 96/120
	I0802 17:53:04.900616   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 97/120
	I0802 17:53:05.902116   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 98/120
	I0802 17:53:06.903376   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 99/120
	I0802 17:53:07.905315   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 100/120
	I0802 17:53:08.906521   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 101/120
	I0802 17:53:09.908106   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 102/120
	I0802 17:53:10.909294   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 103/120
	I0802 17:53:11.911213   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 104/120
	I0802 17:53:12.912581   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 105/120
	I0802 17:53:13.913945   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 106/120
	I0802 17:53:14.915204   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 107/120
	I0802 17:53:15.916821   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 108/120
	I0802 17:53:16.918087   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 109/120
	I0802 17:53:17.919833   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 110/120
	I0802 17:53:18.921107   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 111/120
	I0802 17:53:19.922440   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 112/120
	I0802 17:53:20.923769   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 113/120
	I0802 17:53:21.924956   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 114/120
	I0802 17:53:22.926976   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 115/120
	I0802 17:53:23.928559   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 116/120
	I0802 17:53:24.930031   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 117/120
	I0802 17:53:25.931488   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 118/120
	I0802 17:53:26.932691   29122 main.go:141] libmachine: (ha-652395-m03) Waiting for machine to stop 119/120
	I0802 17:53:27.933511   29122 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 17:53:27.933569   29122 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0802 17:53:27.935227   29122 out.go:177] 
	W0802 17:53:27.936394   29122 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0802 17:53:27.936409   29122 out.go:239] * 
	* 
	W0802 17:53:27.938515   29122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 17:53:27.940454   29122 out.go:177] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-652395 -v=7 --alsologtostderr" : exit status 82
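ha_test.go:464 surfaces exit status 82 because the stop path hit GUEST_STOP_TIMEOUT: the stderr capture above shows minikube asking the kvm2 driver to stop ha-652395-m03 and then polling the machine state roughly once per second for 120 attempts before giving up with the VM still reported as "Running". The Go sketch below is only an illustrative reconstruction of that bounded-polling pattern, not minikube's actual libmachine code; the stubDriver type and waitForStop helper are hypothetical names introduced for this example, and the stub intentionally never stops so that it reproduces the timeout path seen in the log.

	// stopwait.go: a minimal sketch (assumed behavior, not minikube source) of the
	// "Waiting for machine to stop N/120" loop: request a stop, then poll the
	// reported state a bounded number of times before returning a timeout error.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a stand-in for the state a libmachine-style driver reports.
	type vmState int

	const (
		running vmState = iota
		stopped
	)

	// stubDriver pretends to be a VM driver; it accepts the stop request but
	// never transitions to "stopped", mimicking the failure mode above.
	type stubDriver struct{ state vmState }

	func (d *stubDriver) Stop() error       { return nil } // request accepted, but ignored
	func (d *stubDriver) GetState() vmState { return d.state }

	// waitForStop polls the driver until it reports stopped or attempts run out.
	func waitForStop(d *stubDriver, attempts int, interval time.Duration) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			if d.GetState() == stopped {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A short interval keeps the sketch fast; the timestamps in the log
		// above suggest roughly one-second spacing between attempts.
		err := waitForStop(&stubDriver{state: running}, 120, 10*time.Millisecond)
		fmt.Println("stop err:", err) // this is the path that maps to exit status 82
	}

With a real driver the loop returns early once GetState reports the machine as stopped, which is what happened for ha-652395-m04 above after a single attempt; only ha-652395-m03 exhausted all 120 attempts.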
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652395 --wait=true -v=7 --alsologtostderr
E0802 17:55:14.261335   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:56:37.304590   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:57:43.927327   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-652395 --wait=true -v=7 --alsologtostderr: (4m43.650246313s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-652395
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-652395 -n ha-652395
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-652395 logs -n 25: (1.734361641s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m04 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp testdata/cp-test.txt                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m04_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03:/home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m03 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-652395 node stop m02 -v=7                                                     | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-652395 node start m02 -v=7                                                    | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-652395 -v=7                                                           | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-652395 -v=7                                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-652395 --wait=true -v=7                                                    | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:53 UTC | 02 Aug 24 17:58 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-652395                                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:58 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:53:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:53:27.981884   29606 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:53:27.982006   29606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:53:27.982015   29606 out.go:304] Setting ErrFile to fd 2...
	I0802 17:53:27.982019   29606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:53:27.982188   29606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:53:27.982706   29606 out.go:298] Setting JSON to false
	I0802 17:53:27.983601   29606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2152,"bootTime":1722619056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:53:27.983658   29606 start.go:139] virtualization: kvm guest
	I0802 17:53:27.985819   29606 out.go:177] * [ha-652395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:53:27.987274   29606 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:53:27.987328   29606 notify.go:220] Checking for updates...
	I0802 17:53:27.989379   29606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:53:27.990537   29606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:53:27.991673   29606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:53:27.992821   29606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:53:27.994166   29606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:53:27.995890   29606 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:53:27.996047   29606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:53:27.996654   29606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:53:27.996708   29606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:53:28.012870   29606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0802 17:53:28.013340   29606 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:53:28.013941   29606 main.go:141] libmachine: Using API Version  1
	I0802 17:53:28.013960   29606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:53:28.014308   29606 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:53:28.014484   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.050139   29606 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 17:53:28.051480   29606 start.go:297] selected driver: kvm2
	I0802 17:53:28.051495   29606 start.go:901] validating driver "kvm2" against &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:53:28.051674   29606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:53:28.052111   29606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:53:28.052208   29606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:53:28.066763   29606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:53:28.067695   29606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:53:28.067729   29606 cni.go:84] Creating CNI manager for ""
	I0802 17:53:28.067739   29606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 17:53:28.067804   29606 start.go:340] cluster config:
	{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:53:28.067947   29606 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:53:28.069826   29606 out.go:177] * Starting "ha-652395" primary control-plane node in "ha-652395" cluster
	I0802 17:53:28.071228   29606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:53:28.071264   29606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:53:28.071270   29606 cache.go:56] Caching tarball of preloaded images
	I0802 17:53:28.071349   29606 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:53:28.071359   29606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:53:28.071485   29606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:53:28.071681   29606 start.go:360] acquireMachinesLock for ha-652395: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:53:28.071722   29606 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-652395"
	I0802 17:53:28.071736   29606 start.go:96] Skipping create...Using existing machine configuration
	I0802 17:53:28.071744   29606 fix.go:54] fixHost starting: 
	I0802 17:53:28.072091   29606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:53:28.072128   29606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:53:28.086944   29606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0802 17:53:28.087407   29606 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:53:28.087856   29606 main.go:141] libmachine: Using API Version  1
	I0802 17:53:28.087882   29606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:53:28.088378   29606 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:53:28.088611   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.088803   29606 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:53:28.090381   29606 fix.go:112] recreateIfNeeded on ha-652395: state=Running err=<nil>
	W0802 17:53:28.090397   29606 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 17:53:28.092460   29606 out.go:177] * Updating the running kvm2 "ha-652395" VM ...
	I0802 17:53:28.093857   29606 machine.go:94] provisionDockerMachine start ...
	I0802 17:53:28.093878   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.094078   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.096238   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.096670   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.096695   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.096819   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.096985   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.097131   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.097269   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.097446   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.097645   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.097657   29606 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 17:53:28.208249   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:53:28.208283   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.208541   29606 buildroot.go:166] provisioning hostname "ha-652395"
	I0802 17:53:28.208567   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.208746   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.211460   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.211892   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.211926   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.212006   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.212226   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.212388   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.212557   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.212708   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.212916   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.212936   29606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395 && echo "ha-652395" | sudo tee /etc/hostname
	I0802 17:53:28.334100   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:53:28.334125   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.336905   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.337255   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.337286   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.337483   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.337676   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.337832   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.337978   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.338129   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.338293   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.338306   29606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:53:28.456091   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:53:28.456128   29606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:53:28.456158   29606 buildroot.go:174] setting up certificates
	I0802 17:53:28.456167   29606 provision.go:84] configureAuth start
	I0802 17:53:28.456176   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.456476   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:53:28.459480   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.459901   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.459942   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.460045   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.462353   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.462722   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.462748   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.462894   29606 provision.go:143] copyHostCerts
	I0802 17:53:28.462934   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:53:28.462977   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:53:28.462986   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:53:28.463062   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:53:28.463183   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:53:28.463205   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:53:28.463210   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:53:28.463240   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:53:28.463360   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:53:28.463380   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:53:28.463384   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:53:28.463411   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:53:28.463486   29606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395 san=[127.0.0.1 192.168.39.210 ha-652395 localhost minikube]
	I0802 17:53:28.736655   29606 provision.go:177] copyRemoteCerts
	I0802 17:53:28.736713   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:53:28.736735   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.739291   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.739665   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.739695   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.739943   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.740145   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.740290   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.740431   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:53:28.827264   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:53:28.827360   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0802 17:53:28.854459   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:53:28.854527   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:53:28.881733   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:53:28.881799   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:53:28.914763   29606 provision.go:87] duration metric: took 458.575876ms to configureAuth
	I0802 17:53:28.914800   29606 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:53:28.915004   29606 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:53:28.915078   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.917915   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.918350   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.918376   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.918566   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.918792   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.918978   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.919153   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.919330   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.919572   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.919604   29606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:54:59.845145   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:54:59.845178   29606 machine.go:97] duration metric: took 1m31.751305873s to provisionDockerMachine
	I0802 17:54:59.845189   29606 start.go:293] postStartSetup for "ha-652395" (driver="kvm2")
	I0802 17:54:59.845201   29606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:54:59.845216   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:54:59.845526   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:54:59.845552   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:54:59.848564   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.848960   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:54:59.848982   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.849158   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:54:59.849340   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:54:59.849513   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:54:59.849618   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:54:59.935254   29606 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:54:59.939326   29606 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:54:59.939361   29606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:54:59.939453   29606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:54:59.939533   29606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:54:59.939543   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:54:59.939619   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:54:59.948795   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:54:59.971233   29606 start.go:296] duration metric: took 126.028004ms for postStartSetup
	I0802 17:54:59.971279   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:54:59.971560   29606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0802 17:54:59.971586   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:54:59.974208   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.974563   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:54:59.974593   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.974731   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:54:59.974901   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:54:59.975057   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:54:59.975208   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	W0802 17:55:00.057384   29606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0802 17:55:00.057413   29606 fix.go:56] duration metric: took 1m31.985669191s for fixHost
	I0802 17:55:00.057482   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.059946   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.060261   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.060293   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.060387   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.060564   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.060733   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.060851   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.061015   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:55:00.061204   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:55:00.061217   29606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 17:55:00.172476   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722621300.128893529
	
	I0802 17:55:00.172499   29606 fix.go:216] guest clock: 1722621300.128893529
	I0802 17:55:00.172509   29606 fix.go:229] Guest: 2024-08-02 17:55:00.128893529 +0000 UTC Remote: 2024-08-02 17:55:00.057431605 +0000 UTC m=+92.108375435 (delta=71.461924ms)
	I0802 17:55:00.172556   29606 fix.go:200] guest clock delta is within tolerance: 71.461924ms
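Here fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the delta is under a tolerance. Below is a self-contained sketch of that comparison using the two timestamps from the log above; the one-second tolerance is an assumption for illustration, not minikube's actual constant:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseDateOutput converts the guest's `date +%s.%N` output, e.g.
    // "1722621300.128893529", into a time.Time (assumes a full 9-digit
    // nanosecond field, as in the log above).
    func parseDateOutput(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseDateOutput("1722621300.128893529") // guest clock value from the log
        if err != nil {
            panic(err)
        }
        // Host-side reference time from the same log entry.
        remote := time.Date(2024, 8, 2, 17, 55, 0, 57431605, time.UTC)
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance, not minikube's constant
        fmt.Printf("clock delta %v, within %v: %v\n", delta, tolerance, delta <= tolerance)
    }

Run against the logged values this yields a delta of roughly 71ms, matching the 71.461924ms reported above.
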
	I0802 17:55:00.172561   29606 start.go:83] releasing machines lock for "ha-652395", held for 1m32.100830528s
	I0802 17:55:00.172583   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.172850   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:55:00.175735   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.176184   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.176212   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.176411   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.176842   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.177008   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.177107   29606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:55:00.177146   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.177199   29606 ssh_runner.go:195] Run: cat /version.json
	I0802 17:55:00.177219   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.179870   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180213   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180269   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.180293   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180455   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.180584   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.180613   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180638   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.180762   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.180825   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.180932   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.181057   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.181107   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:55:00.181198   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:55:00.334805   29606 ssh_runner.go:195] Run: systemctl --version
	I0802 17:55:00.344087   29606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:55:00.496043   29606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:55:00.501429   29606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:55:00.501493   29606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:55:00.510411   29606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0802 17:55:00.510436   29606 start.go:495] detecting cgroup driver to use...
	I0802 17:55:00.510505   29606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:55:00.526229   29606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:55:00.540873   29606 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:55:00.540928   29606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:55:00.554608   29606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:55:00.568179   29606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:55:00.715955   29606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:55:00.856815   29606 docker.go:233] disabling docker service ...
	I0802 17:55:00.856879   29606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:55:00.872630   29606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:55:00.885780   29606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:55:01.027040   29606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:55:01.169656   29606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:55:01.184009   29606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:55:01.204128   29606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:55:01.204202   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.214292   29606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:55:01.214362   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.224034   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.233747   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.243864   29606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:55:01.253727   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.263338   29606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.273983   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.284106   29606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:55:01.292917   29606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:55:01.302295   29606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:55:01.440061   29606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:55:01.724369   29606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:55:01.724443   29606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:55:01.729937   29606 start.go:563] Will wait 60s for crictl version
	I0802 17:55:01.730002   29606 ssh_runner.go:195] Run: which crictl
	I0802 17:55:01.733602   29606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:55:01.768060   29606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:55:01.768147   29606 ssh_runner.go:195] Run: crio --version
	I0802 17:55:01.795814   29606 ssh_runner.go:195] Run: crio --version
	I0802 17:55:01.825284   29606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:55:01.826741   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:55:01.829259   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:01.829696   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:01.829721   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:01.829918   29606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:55:01.834385   29606 kubeadm.go:883] updating cluster {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:55:01.834510   29606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:55:01.834563   29606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:55:01.879763   29606 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:55:01.879782   29606 crio.go:433] Images already preloaded, skipping extraction
	I0802 17:55:01.879831   29606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:55:01.918899   29606 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:55:01.918922   29606 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:55:01.918931   29606 kubeadm.go:934] updating node { 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0802 17:55:01.919041   29606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:55:01.919121   29606 ssh_runner.go:195] Run: crio config
	I0802 17:55:01.966895   29606 cni.go:84] Creating CNI manager for ""
	I0802 17:55:01.966917   29606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 17:55:01.966929   29606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:55:01.967008   29606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-652395 NodeName:ha-652395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:55:01.967262   29606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-652395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
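The dump above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a hedged sketch, the snippet below shows one way the KubeletConfiguration document could be pulled back out and checked against the cgroupfs setting written into /etc/crio/crio.conf.d/02-crio.conf earlier; it uses gopkg.in/yaml.v3 purely for illustration and a deliberately minimal struct, not the real kubelet API types:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // kubeletDoc is a minimal subset of the KubeletConfiguration document above,
    // just enough to read the two fields checked here.
    type kubeletDoc struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        // Path taken from the `scp memory --> /var/tmp/minikube/kubeadm.yaml.new` line further down.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc kubeletDoc
            err := dec.Decode(&doc)
            if err == io.EOF {
                break // end of the multi-document stream
            }
            if err != nil {
                panic(err)
            }
            if doc.Kind == "KubeletConfiguration" {
                fmt.Printf("cgroupDriver=%q endpoint=%q\n", doc.CgroupDriver, doc.ContainerRuntimeEndpoint)
            }
        }
    }
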
	
	I0802 17:55:01.967330   29606 kube-vip.go:115] generating kube-vip config ...
	I0802 17:55:01.967383   29606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:55:01.978137   29606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:55:01.978252   29606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
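The static pod above runs kube-vip configured entirely through environment variables: `address` (192.168.39.254) is the control-plane VIP, and `lb_enable`/`lb_port` appear to correspond to the control-plane load-balancing auto-enabled at kube-vip.go:167 above. A minimal sketch of reading those values back out of the generated manifest; gopkg.in/yaml.v3 and the struct shape are illustrative assumptions, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // vipPod models only the env list of the pod's containers.
    type vipPod struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        // Target path taken from the kube-vip.yaml scp line below.
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod vipPod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        for _, c := range pod.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" || e.Name == "port" || e.Name == "lb_port" {
                    fmt.Printf("%s=%s\n", e.Name, e.Value)
                }
            }
        }
    }
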
	I0802 17:55:01.978342   29606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:55:01.987305   29606 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:55:01.987423   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0802 17:55:01.996160   29606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0802 17:55:02.012102   29606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:55:02.027941   29606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 17:55:02.043509   29606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0802 17:55:02.060749   29606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:55:02.064815   29606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:55:02.212935   29606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:55:02.282071   29606 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.210
	I0802 17:55:02.282098   29606 certs.go:194] generating shared ca certs ...
	I0802 17:55:02.282119   29606 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.282345   29606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:55:02.282401   29606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:55:02.282424   29606 certs.go:256] generating profile certs ...
	I0802 17:55:02.282549   29606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:55:02.282587   29606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c
	I0802 17:55:02.282608   29606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.62 192.168.39.254]
	I0802 17:55:02.436648   29606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c ...
	I0802 17:55:02.436681   29606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c: {Name:mk30a71839e34750fa7129e3bd9f1af0592219af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.436853   29606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c ...
	I0802 17:55:02.436865   29606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c: {Name:mkcf581c5b6beb3c065bad1c59e6accde21cde4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.436930   29606 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:55:02.437081   29606 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:55:02.437205   29606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:55:02.437225   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:55:02.437238   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:55:02.437250   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:55:02.437264   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:55:02.437282   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:55:02.437317   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:55:02.437335   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:55:02.437345   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:55:02.437399   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:55:02.437432   29606 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:55:02.437441   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:55:02.437460   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:55:02.437482   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:55:02.437503   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:55:02.437541   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:55:02.437568   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:55:02.437582   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:55:02.437594   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:02.438119   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:55:02.694023   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:55:02.771506   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:55:02.879030   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:55:02.997040   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0802 17:55:03.139689   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 17:55:03.191837   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:55:03.286648   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:55:03.429902   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:55:03.504973   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:55:03.583177   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:55:03.621205   29606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:55:03.641979   29606 ssh_runner.go:195] Run: openssl version
	I0802 17:55:03.648479   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:55:03.664674   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.669556   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.669621   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.676464   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:55:03.693337   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:55:03.712266   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.723934   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.723995   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.733929   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:55:03.751717   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:55:03.775569   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.789593   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.789651   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.805417   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 17:55:03.819473   29606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:55:03.824725   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 17:55:03.833108   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 17:55:03.843492   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 17:55:03.852286   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 17:55:03.862183   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 17:55:03.872172   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
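The series of `openssl x509 ... -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509, shown as an illustrative sketch rather than minikube's implementation (the certificate path is taken from the last log line):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkend reports whether the certificate in pemPath expires within the given
    // window, mirroring `openssl x509 -checkend <seconds>`.
    func checkend(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Until(cert.NotAfter) < window, nil
    }

    func main() {
        expiring, err := checkend("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", expiring)
    }

openssl's -checkend exits non-zero when the certificate will expire inside the window; the boolean returned here mirrors that condition.
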
	I0802 17:55:03.879528   29606 kubeadm.go:392] StartCluster: {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:55:03.879687   29606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:55:03.879748   29606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:55:03.936763   29606 cri.go:89] found id: "211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93"
	I0802 17:55:03.936785   29606 cri.go:89] found id: "4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d"
	I0802 17:55:03.936808   29606 cri.go:89] found id: "86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82"
	I0802 17:55:03.936813   29606 cri.go:89] found id: "bf950c5d12e435f630c4b4c3abcb6a81923d57812df9231b4238094d723c3c5c"
	I0802 17:55:03.936817   29606 cri.go:89] found id: "a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4"
	I0802 17:55:03.936822   29606 cri.go:89] found id: "c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6"
	I0802 17:55:03.936859   29606 cri.go:89] found id: "219d7f25bcfd6e77de5845534f7aaf968d2d78f12867c3527ea9e51c861bdaa8"
	I0802 17:55:03.936871   29606 cri.go:89] found id: "fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf"
	I0802 17:55:03.936876   29606 cri.go:89] found id: "26f9dbb4e53b68e2ae8d51792f99b7f0ed2cc9b696a1ff5456b7e7684f96f87f"
	I0802 17:55:03.936883   29606 cri.go:89] found id: "044a175eb50533624b826a3c1d1aa52bb8d46178b9a460454508b7721c870c20"
	I0802 17:55:03.936887   29606 cri.go:89] found id: "d809bfdbc457e4365c2eedbffa0f6ac8e940d0597edea05a183fb77ce8c6937d"
	I0802 17:55:03.936892   29606 cri.go:89] found id: "131024fd4f59ee579527315d5b100fb042ffd52f2030700b6c8d0d77872ee0e5"
	I0802 17:55:03.936897   29606 cri.go:89] found id: "c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e"
	I0802 17:55:03.936903   29606 cri.go:89] found id: "122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1"
	I0802 17:55:03.936911   29606 cri.go:89] found id: "e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1"
	I0802 17:55:03.936915   29606 cri.go:89] found id: "dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d"
	I0802 17:55:03.936919   29606 cri.go:89] found id: "a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6"
	I0802 17:55:03.936931   29606 cri.go:89] found id: "c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca"
	I0802 17:55:03.936938   29606 cri.go:89] found id: "fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1"
	I0802 17:55:03.936942   29606 cri.go:89] found id: ""
	I0802 17:55:03.936987   29606 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.254340182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621492254314550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=618f27ee-d235-4ab7-888b-a2fc0d49fda1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.254891103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3108645d-3036-492e-b59a-8c44eab1704c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.254959775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3108645d-3036-492e-b59a-8c44eab1704c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.255376921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3108645d-3036-492e-b59a-8c44eab1704c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.341859430Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff65b61b-c955-43ec-9752-4f49bfbd6b10 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.341943567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff65b61b-c955-43ec-9752-4f49bfbd6b10 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.342851502Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bed9290-de4a-4825-983b-df6b3d5152a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.343295983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621492343275550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bed9290-de4a-4825-983b-df6b3d5152a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.343933672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d9840d6-e2cc-4c8b-aef7-1479c2e0da8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.343996505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d9840d6-e2cc-4c8b-aef7-1479c2e0da8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.344409728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d9840d6-e2cc-4c8b-aef7-1479c2e0da8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.393612166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6a78e37-6a88-4bff-a9d8-8932e5ca1328 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.393688825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6a78e37-6a88-4bff-a9d8-8932e5ca1328 name=/runtime.v1.RuntimeService/Version
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.394938895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=977bd82d-4320-4fab-b7d4-2bfcf12aa9b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.395535216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621492395502507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=977bd82d-4320-4fab-b7d4-2bfcf12aa9b1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.396367330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c5222e3-a1dd-4660-8a3e-02458f3a9b13 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.396461997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c5222e3-a1dd-4660-8a3e-02458f3a9b13 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 17:58:12 ha-652395 crio[3732]: time="2024-08-02 17:58:12.396958933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c5222e3-a1dd-4660-8a3e-02458f3a9b13 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f7d6d025dc3e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   3                   704bed46ab9f1       kube-controller-manager-ha-652395
	2a7d5f519122f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   df01db970890c       storage-provisioner
	d3d0b5e102e9f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Running             kube-apiserver            3                   78be7e219081e       kube-apiserver-ha-652395
	b764e2109a4e9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   2                   704bed46ab9f1       kube-controller-manager-ha-652395
	8effce7b51652       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   5aa357a4cd319       busybox-fc5497c4f-wwdvm
	9e125f9f2e129       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   df01db970890c       storage-provisioner
	75d0a59311a1c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   80cea65b465ee       kube-vip-ha-652395
	211084ef30ab2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      3 minutes ago        Running             kube-scheduler            1                   fd6f55a18f711       kube-scheduler-ha-652395
	4c17f3881c093       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago        Exited              kube-apiserver            2                   78be7e219081e       kube-apiserver-ha-652395
	86e9c6b3f3798       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   016ccc9755747       etcd-ha-652395
	bf950c5d12e43       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      3 minutes ago        Running             kindnet-cni               1                   87acad60b8a87       kindnet-bjrkb
	a6e31c0eb2882       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago        Running             kube-proxy                1                   03e1788bd730d       kube-proxy-l7npk
	c03f76e97b2f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   6c4e1481ad362       coredns-7db6d8ff4d-gzmsx
	fefd10fbf07b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   8608d21543358       coredns-7db6d8ff4d-7bnn4
	8fd869ff4b02d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   e8db151d94a97       busybox-fc5497c4f-wwdvm
	c360a48ed21dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   236df4e4d374d       coredns-7db6d8ff4d-7bnn4
	122af758e0175       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   7a85af5981798       coredns-7db6d8ff4d-gzmsx
	e5737b2ef0345       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   93bf8df122de4       kindnet-bjrkb
	dbaf687f1fee9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   aa85cd011b109       kube-proxy-l7npk
	c587c6ce09941       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   d14257a1927ee       kube-scheduler-ha-652395
	fae5bea03ccdc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   540d9595b8d86       etcd-ha-652395
	
	
	==> coredns [122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1] <==
	[INFO] 10.244.0.4:56165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046005s
	[INFO] 10.244.0.4:44437 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034828s
	[INFO] 10.244.0.4:35238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032154s
	[INFO] 10.244.1.2:56315 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166841s
	[INFO] 10.244.1.2:47239 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198329s
	[INFO] 10.244.1.2:57096 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123709s
	[INFO] 10.244.2.2:46134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000490913s
	[INFO] 10.244.2.2:53250 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148459s
	[INFO] 10.244.0.4:56093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118253s
	[INFO] 10.244.0.4:34180 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008075s
	[INFO] 10.244.0.4:45410 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1903&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1554461129]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.855) (total time: 12692ms):
	Trace[1554461129]: ---"Objects listed" error:Unauthorized 12692ms (17:53:27.547)
	Trace[1554461129]: [12.692708827s] [12.692708827s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e] <==
	[INFO] 10.244.0.4:37426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138717s
	[INFO] 10.244.1.2:36979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118362s
	[INFO] 10.244.2.2:57363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012985s
	[INFO] 10.244.2.2:39508 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130428s
	[INFO] 10.244.1.2:35447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118938s
	[INFO] 10.244.2.2:32993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168916s
	[INFO] 10.244.2.2:41103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214849s
	[INFO] 10.244.0.4:36090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133411s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1903&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[329143856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.884) (total time: 12661ms):
	Trace[329143856]: ---"Objects listed" error:Unauthorized 12661ms (17:53:27.545)
	Trace[329143856]: [12.661539453s] [12.661539453s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1310938926]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.550) (total time: 12995ms):
	Trace[1310938926]: ---"Objects listed" error:Unauthorized 12995ms (17:53:27.545)
	Trace[1310938926]: [12.995424272s] [12.995424272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-652395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:58:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-652395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ba599bf07ef4e41ba86086b6ac2ff1a
	  System UUID:                5ba599bf-07ef-4e41-ba86-086b6ac2ff1a
	  Boot ID:                    ed33b037-d8f7-4cbf-a057-27f14a3cc7dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwdvm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-7bnn4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-gzmsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-652395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-bjrkb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-652395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-652395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-l7npk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-652395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-652395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m26s              kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-652395 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Warning  ContainerGCFailed        4m8s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           31s                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	
	
	Name:               ha-652395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:58:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-652395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4562f021ca54cf29302ae6053b176ca
	  System UUID:                b4562f02-1ca5-4cf2-9302-ae6053b176ca
	  Boot ID:                    a9ea8acb-21c4-41a3-adad-896284e4b57f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4gkm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-652395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7n2wh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-652395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-652395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rtbb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-652395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-652395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  NodeNotReady             9m28s                  node-controller  Node ha-652395-m02 status is now: NodeNotReady
	  Normal  Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m48s (x8 over 2m48s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s (x8 over 2m48s)  kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s (x7 over 2m48s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           31s                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	
	
	Name:               ha-652395-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_46_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:46:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:58:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:57:47 +0000   Fri, 02 Aug 2024 17:57:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:57:47 +0000   Fri, 02 Aug 2024 17:57:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:57:47 +0000   Fri, 02 Aug 2024 17:57:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:57:47 +0000   Fri, 02 Aug 2024 17:57:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    ha-652395-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98b40f3acdab4627b19b6017ea4f9a53
	  System UUID:                98b40f3a-cdab-4627-b19b-6017ea4f9a53
	  Boot ID:                    457eaafd-d2b7-4b60-9d96-63a08e32737f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lwm5m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-652395-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-qw2hm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-652395-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-652395-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-fgghw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-652395-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-652395-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 40s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-652395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal   NodeNotReady             98s                node-controller  Node ha-652395-m03 status is now: NodeNotReady
	  Normal   RegisteredNode           95s                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	  Normal   Starting                 56s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-652395-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-652395-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 56s                kubelet          Node ha-652395-m03 has been rebooted, boot id: 457eaafd-d2b7-4b60-9d96-63a08e32737f
	  Normal   NodeReady                56s                kubelet          Node ha-652395-m03 status is now: NodeReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-652395-m03 event: Registered Node ha-652395-m03 in Controller
	
	
	Name:               ha-652395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_47_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:58:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:58:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:58:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:58:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-652395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 998c02abf56b4784b82e5c48780cf7d3
	  System UUID:                998c02ab-f56b-4784-b82e-5c48780cf7d3
	  Boot ID:                    767ab23a-9b64-4543-b04a-3d734b32750a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nksdg       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-d44zn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-652395-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   NodeNotReady             98s                node-controller  Node ha-652395-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           95s                node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           31s                node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-652395-m04 has been rebooted, boot id: 767ab23a-9b64-4543-b04a-3d734b32750a
	  Normal   NodeReady                8s                 kubelet          Node ha-652395-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +4.520223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.851587] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054661] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055410] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.166920] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.132294] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.235363] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.898825] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.781164] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 2 17:44] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.095134] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.851149] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.579996] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 2 17:45] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 2 17:55] systemd-fstab-generator[3653]: Ignoring "noauto" option for root device
	[  +0.145049] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.174682] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.136281] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +0.271418] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.768336] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +4.304021] kauditd_printk_skb: 223 callbacks suppressed
	[ +38.522867] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82] <==
	{"level":"warn","ts":"2024-08-02T17:57:11.027553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:57:11.050473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:57:11.127634Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"5a5dd032def1271d","from":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-02T17:57:12.333934Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.62:2380/version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:12.333973Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:14.222447Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:14.222518Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:16.336495Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.62:2380/version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:16.336612Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:19.223272Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:19.223393Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:20.338999Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.62:2380/version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:20.33907Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"254930c0dd0c8ee9","error":"Get \"https://192.168.39.62:2380/version\": dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-02T17:57:23.670641Z","caller":"traceutil/trace.go:171","msg":"trace[1887612350] linearizableReadLoop","detail":"{readStateIndex:2859; appliedIndex:2859; }","duration":"103.649013ms","start":"2024-08-02T17:57:23.566948Z","end":"2024-08-02T17:57:23.670597Z","steps":["trace[1887612350] 'read index received'  (duration: 103.643718ms)","trace[1887612350] 'applied index is now lower than readState.Index'  (duration: 3.908µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T17:57:23.677951Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.909981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T17:57:23.67828Z","caller":"traceutil/trace.go:171","msg":"trace[534241329] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2470; }","duration":"111.340375ms","start":"2024-08-02T17:57:23.566921Z","end":"2024-08-02T17:57:23.678261Z","steps":["trace[534241329] 'agreement among raft nodes before linearized reading'  (duration: 103.781247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:57:24.224056Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-02T17:57:24.224099Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"254930c0dd0c8ee9","rtt":"0s","error":"dial tcp 192.168.39.62:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-02T17:57:24.374156Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.3743Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.393626Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.393667Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5a5dd032def1271d","to":"254930c0dd0c8ee9","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-02T17:57:24.393886Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.407577Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5a5dd032def1271d","to":"254930c0dd0c8ee9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-02T17:57:24.407617Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	
	
	==> etcd [fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1] <==
	{"level":"info","ts":"2024-08-02T17:53:29.078677Z","caller":"traceutil/trace.go:171","msg":"trace[2003454795] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"116.648752ms","start":"2024-08-02T17:53:28.962025Z","end":"2024-08-02T17:53:29.078674Z","steps":["trace[2003454795] 'agreement among raft nodes before linearized reading'  (duration: 109.327583ms)"],"step_count":1}
	2024/08/02 17:53:29 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-02T17:53:29.077141Z","caller":"traceutil/trace.go:171","msg":"trace[2114495648] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"706.494712ms","start":"2024-08-02T17:53:28.370639Z","end":"2024-08-02T17:53:29.077133Z","steps":["trace[2114495648] 'agreement among raft nodes before linearized reading'  (duration: 683.586881ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:53:29.078843Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:53:28.370619Z","time spent":"708.2146ms","remote":"127.0.0.1:39664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/08/02 17:53:29 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-02T17:53:29.107642Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T17:53:29.107729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T17:53:29.107802Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"5a5dd032def1271d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-02T17:53:29.107935Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.107962Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108001Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108122Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108195Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108248Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108271Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108279Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108287Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108304Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108341Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.10838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108419Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108478Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.110873Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-08-02T17:53:29.110968Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-08-02T17:53:29.110989Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-652395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> kernel <==
	 17:58:13 up 14 min,  0 users,  load average: 0.35, 0.39, 0.26
	Linux ha-652395 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf950c5d12e435f630c4b4c3abcb6a81923d57812df9231b4238094d723c3c5c] <==
	I0802 17:57:34.331715       1 main.go:299] handling current node
	I0802 17:57:44.325825       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:57:44.325970       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:57:44.326131       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:57:44.326157       1 main.go:299] handling current node
	I0802 17:57:44.326179       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:57:44.326195       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:57:44.326261       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:57:44.326279       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:57:54.323541       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:57:54.323588       1 main.go:299] handling current node
	I0802 17:57:54.323618       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:57:54.323626       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:57:54.323789       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:57:54.323878       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:57:54.323984       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:57:54.324010       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:58:04.324228       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:58:04.324364       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:58:04.324601       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:58:04.324751       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:58:04.324869       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:58:04.324905       1 main.go:299] handling current node
	I0802 17:58:04.324930       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:58:04.324947       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1] <==
	I0802 17:53:02.519607       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:02.519713       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:53:02.519949       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:02.519983       1 main.go:299] handling current node
	I0802 17:53:02.520006       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:02.520022       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:02.520117       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:02.520136       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:12.519366       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:12.519470       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:12.519629       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:12.519650       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:53:12.519717       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:12.519735       1 main.go:299] handling current node
	I0802 17:53:12.519751       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:12.519767       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:22.519506       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:22.519624       1 main.go:299] handling current node
	I0802 17:53:22.519661       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:22.519680       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:22.519836       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:22.519859       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:22.519930       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:22.519948       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	E0802 17:53:27.550673       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d] <==
	I0802 17:55:04.019665       1 options.go:221] external host was not specified, using 192.168.39.210
	I0802 17:55:04.022291       1 server.go:148] Version: v1.30.3
	I0802 17:55:04.022507       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:04.648065       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0802 17:55:04.648246       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 17:55:04.648847       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0802 17:55:04.648872       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0802 17:55:04.649053       1 instance.go:299] Using reconciler: lease
	W0802 17:55:24.640082       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0802 17:55:24.640082       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0802 17:55:24.650301       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1] <==
	I0802 17:55:58.690627       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0802 17:55:58.690914       1 available_controller.go:423] Starting AvailableConditionController
	I0802 17:55:58.690939       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0802 17:55:58.690995       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0802 17:55:58.691219       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0802 17:55:58.690001       1 establishing_controller.go:76] Starting EstablishingController
	I0802 17:55:58.690081       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0802 17:55:58.788979       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 17:55:58.789071       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 17:55:58.789300       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 17:55:58.790126       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 17:55:58.790238       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 17:55:58.790286       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 17:55:58.790764       1 aggregator.go:165] initial CRD sync complete...
	I0802 17:55:58.790878       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 17:55:58.791273       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 17:55:58.791313       1 cache.go:39] Caches are synced for autoregister controller
	I0802 17:55:58.791364       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0802 17:55:58.792191       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 17:55:58.796239       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0802 17:55:58.802488       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 17:55:58.802540       1 policy_source.go:224] refreshing policies
	I0802 17:55:58.826997       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 17:55:59.701537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 17:56:32.375340       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9] <==
	I0802 17:55:45.335528       1 serving.go:380] Generated self-signed cert in-memory
	I0802 17:55:45.624045       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0802 17:55:45.624126       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:45.625686       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 17:55:45.626142       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 17:55:45.626848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0802 17:55:45.627956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0802 17:55:55.631823       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.210:8443/healthz\": dial tcp 192.168.39.210:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be] <==
	I0802 17:56:37.224489       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0802 17:56:37.229325       1 shared_informer.go:320] Caches are synced for daemon sets
	I0802 17:56:37.234370       1 shared_informer.go:320] Caches are synced for taint
	I0802 17:56:37.234517       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0802 17:56:37.234625       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395"
	I0802 17:56:37.234678       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m02"
	I0802 17:56:37.234808       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m03"
	I0802 17:56:37.234922       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-652395-m04"
	I0802 17:56:37.235105       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0802 17:56:37.238083       1 shared_informer.go:320] Caches are synced for persistent volume
	I0802 17:56:37.244047       1 shared_informer.go:320] Caches are synced for GC
	I0802 17:56:37.246802       1 shared_informer.go:320] Caches are synced for HPA
	I0802 17:56:37.290027       1 shared_informer.go:320] Caches are synced for TTL
	I0802 17:56:37.298773       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0802 17:56:37.690030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 17:56:37.690071       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0802 17:56:37.697351       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 17:56:43.018327       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-s4xs5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s4xs5\": the object has been modified; please apply your changes to the latest version and try again"
	I0802 17:56:43.018692       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d8a9b68e-d7d1-47c2-9626-10228ae00074", APIVersion:"v1", ResourceVersion:"296", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s4xs5 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s4xs5": the object has been modified; please apply your changes to the latest version and try again
	I0802 17:56:43.041339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.541104ms"
	I0802 17:56:43.041487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.557µs"
	I0802 17:57:17.540511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.705µs"
	I0802 17:57:35.779604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.699028ms"
	I0802 17:57:35.779896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.139µs"
	I0802 17:58:04.180853       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	
	
	==> kube-proxy [a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4] <==
	E0802 17:55:27.779969       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-652395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0802 17:55:46.211478       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-652395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0802 17:55:46.211541       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0802 17:55:46.244511       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:55:46.244576       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:55:46.244594       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:55:46.246825       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:55:46.247116       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:55:46.247143       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:46.248541       1 config.go:192] "Starting service config controller"
	I0802 17:55:46.248580       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:55:46.248608       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:55:46.248623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:55:46.249335       1 config.go:319] "Starting node config controller"
	I0802 17:55:46.249358       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0802 17:55:49.284056       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0802 17:55:49.284317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:55:49.284620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284696       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:55:49.284744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0802 17:55:50.149739       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:55:50.250402       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:55:50.649680       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d] <==
	E0802 17:52:15.267607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.338898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.338977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.339047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.531757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.532974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.532370       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.533096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.533228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.533326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:35.747616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:35.747708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:38.819471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:38.819587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:38.819713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:38.819769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:57.251113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:57.251315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:57.251512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:57.251607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:53:00.324137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:53:00.324177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93] <==
	W0802 17:55:42.756064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:42.756114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:43.600273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.210:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:43.600332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.210:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:44.282505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.210:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:44.282577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.210:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.149064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.210:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.149126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.210:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.892319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.210:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.892395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.210:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.984703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.210:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.984839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.210:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.080723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.210:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.080816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.210:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.166917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.210:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.166956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.210:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.218732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.210:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.218817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.210:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:47.088650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:47.088770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:58.714873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:55:58.715064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:55:58.715268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:55:58.715358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0802 17:56:06.265643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca] <==
	W0802 17:53:21.976372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:21.976582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 17:53:22.157002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:22.157099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:22.162331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 17:53:22.162416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 17:53:22.682516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 17:53:22.682605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 17:53:22.712918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:22.712959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:22.908843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 17:53:22.908992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 17:53:22.923265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 17:53:22.923360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 17:53:23.151794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:53:23.151868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:53:23.461038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:23.461080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:24.539079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 17:53:24.539160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 17:53:27.622409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:27.622509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:28.888666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:28.888720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:29.034582       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 02 17:56:02 ha-652395 kubelet[1358]: I0802 17:56:02.874664    1358 scope.go:117] "RemoveContainer" containerID="b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9"
	Aug 02 17:56:02 ha-652395 kubelet[1358]: E0802 17:56:02.875284    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-652395_kube-system(b35503df9ee27b31247351a3b8b83f9c)\"" pod="kube-system/kube-controller-manager-ha-652395" podUID="b35503df9ee27b31247351a3b8b83f9c"
	Aug 02 17:56:04 ha-652395 kubelet[1358]: E0802 17:56:04.856965    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:56:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:56:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:56:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:56:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:56:13 ha-652395 kubelet[1358]: I0802 17:56:13.845879    1358 scope.go:117] "RemoveContainer" containerID="b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9"
	Aug 02 17:56:13 ha-652395 kubelet[1358]: E0802 17:56:13.846239    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-652395_kube-system(b35503df9ee27b31247351a3b8b83f9c)\"" pod="kube-system/kube-controller-manager-ha-652395" podUID="b35503df9ee27b31247351a3b8b83f9c"
	Aug 02 17:56:14 ha-652395 kubelet[1358]: I0802 17:56:14.845417    1358 scope.go:117] "RemoveContainer" containerID="9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5"
	Aug 02 17:56:24 ha-652395 kubelet[1358]: I0802 17:56:24.847091    1358 scope.go:117] "RemoveContainer" containerID="b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9"
	Aug 02 17:56:45 ha-652395 kubelet[1358]: I0802 17:56:45.845323    1358 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-652395" podUID="1ee810a9-9d93-4cff-a5bb-60bab005eb5c"
	Aug 02 17:56:45 ha-652395 kubelet[1358]: I0802 17:56:45.863900    1358 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-652395"
	Aug 02 17:56:46 ha-652395 kubelet[1358]: I0802 17:56:46.685108    1358 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-652395" podUID="1ee810a9-9d93-4cff-a5bb-60bab005eb5c"
	Aug 02 17:56:54 ha-652395 kubelet[1358]: I0802 17:56:54.863413    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-652395" podStartSLOduration=9.863367292 podStartE2EDuration="9.863367292s" podCreationTimestamp="2024-08-02 17:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 17:56:54.863211047 +0000 UTC m=+770.173561401" watchObservedRunningTime="2024-08-02 17:56:54.863367292 +0000 UTC m=+770.173717648"
	Aug 02 17:57:04 ha-652395 kubelet[1358]: E0802 17:57:04.857837    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:57:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:58:04 ha-652395 kubelet[1358]: E0802 17:58:04.858144    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:58:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 17:58:11.976900   31081 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-652395 -n ha-652395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-652395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (407.91s)
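The `bufio.Scanner: token too long` error in the stderr above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt. Below is a minimal, hypothetical sketch (not minikube's actual logs.go implementation; the file path is a placeholder) showing how enlarging the scanner buffer avoids that error:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    )

    func main() {
    	// Placeholder path; the log above refers to lastStart.txt under the test home.
    	f, err := os.Open("lastStart.txt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	// The default limit is bufio.MaxScanTokenSize (64 KiB); raising it lets
    	// very long lines scan instead of failing with "token too long".
    	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
    	for sc.Scan() {
    		fmt.Println(sc.Text())
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "scan error:", err)
    	}
    }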

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 stop -v=7 --alsologtostderr
E0802 18:00:14.261332   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 stop -v=7 --alsologtostderr: exit status 82 (2m0.463380003s)

                                                
                                                
-- stdout --
	* Stopping node "ha-652395-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:58:31.581366   31499 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:58:31.581492   31499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:58:31.581501   31499 out.go:304] Setting ErrFile to fd 2...
	I0802 17:58:31.581508   31499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:58:31.581715   31499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:58:31.581947   31499 out.go:298] Setting JSON to false
	I0802 17:58:31.582022   31499 mustload.go:65] Loading cluster: ha-652395
	I0802 17:58:31.582362   31499 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:58:31.582445   31499 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:58:31.582624   31499 mustload.go:65] Loading cluster: ha-652395
	I0802 17:58:31.582754   31499 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:58:31.582786   31499 stop.go:39] StopHost: ha-652395-m04
	I0802 17:58:31.583209   31499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:58:31.583255   31499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:58:31.598456   31499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37789
	I0802 17:58:31.598959   31499 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:58:31.599529   31499 main.go:141] libmachine: Using API Version  1
	I0802 17:58:31.599550   31499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:58:31.599872   31499 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:58:31.602297   31499 out.go:177] * Stopping node "ha-652395-m04"  ...
	I0802 17:58:31.603558   31499 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 17:58:31.603602   31499 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 17:58:31.603835   31499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 17:58:31.603862   31499 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 17:58:31.606624   31499 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:58:31.607014   31499 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:57:58 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 17:58:31.607045   31499 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 17:58:31.607245   31499 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 17:58:31.607415   31499 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 17:58:31.607581   31499 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 17:58:31.607714   31499 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	I0802 17:58:31.685030   31499 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 17:58:31.736689   31499 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 17:58:31.788610   31499 main.go:141] libmachine: Stopping "ha-652395-m04"...
	I0802 17:58:31.788641   31499 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 17:58:31.790315   31499 main.go:141] libmachine: (ha-652395-m04) Calling .Stop
	I0802 17:58:31.793991   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 0/120
	I0802 17:58:32.795976   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 1/120
	I0802 17:58:33.797666   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 2/120
	I0802 17:58:34.799293   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 3/120
	I0802 17:58:35.801612   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 4/120
	I0802 17:58:36.803554   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 5/120
	I0802 17:58:37.805727   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 6/120
	I0802 17:58:38.806993   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 7/120
	I0802 17:58:39.809268   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 8/120
	I0802 17:58:40.810561   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 9/120
	I0802 17:58:41.812614   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 10/120
	I0802 17:58:42.814025   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 11/120
	I0802 17:58:43.816178   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 12/120
	I0802 17:58:44.817554   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 13/120
	I0802 17:58:45.818791   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 14/120
	I0802 17:58:46.820785   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 15/120
	I0802 17:58:47.822914   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 16/120
	I0802 17:58:48.824204   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 17/120
	I0802 17:58:49.825518   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 18/120
	I0802 17:58:50.827265   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 19/120
	I0802 17:58:51.829434   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 20/120
	I0802 17:58:52.831204   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 21/120
	I0802 17:58:53.832868   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 22/120
	I0802 17:58:54.834272   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 23/120
	I0802 17:58:55.835585   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 24/120
	I0802 17:58:56.837254   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 25/120
	I0802 17:58:57.838531   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 26/120
	I0802 17:58:58.839819   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 27/120
	I0802 17:58:59.841373   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 28/120
	I0802 17:59:00.843131   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 29/120
	I0802 17:59:01.845302   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 30/120
	I0802 17:59:02.846713   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 31/120
	I0802 17:59:03.848088   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 32/120
	I0802 17:59:04.849497   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 33/120
	I0802 17:59:05.851950   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 34/120
	I0802 17:59:06.854296   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 35/120
	I0802 17:59:07.855724   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 36/120
	I0802 17:59:08.857623   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 37/120
	I0802 17:59:09.858956   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 38/120
	I0802 17:59:10.860713   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 39/120
	I0802 17:59:11.862517   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 40/120
	I0802 17:59:12.863905   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 41/120
	I0802 17:59:13.865550   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 42/120
	I0802 17:59:14.866805   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 43/120
	I0802 17:59:15.868255   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 44/120
	I0802 17:59:16.870126   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 45/120
	I0802 17:59:17.872357   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 46/120
	I0802 17:59:18.874493   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 47/120
	I0802 17:59:19.876033   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 48/120
	I0802 17:59:20.877422   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 49/120
	I0802 17:59:21.879449   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 50/120
	I0802 17:59:22.881906   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 51/120
	I0802 17:59:23.883200   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 52/120
	I0802 17:59:24.884739   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 53/120
	I0802 17:59:25.886078   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 54/120
	I0802 17:59:26.888059   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 55/120
	I0802 17:59:27.889586   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 56/120
	I0802 17:59:28.891441   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 57/120
	I0802 17:59:29.893713   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 58/120
	I0802 17:59:30.895890   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 59/120
	I0802 17:59:31.897768   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 60/120
	I0802 17:59:32.899195   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 61/120
	I0802 17:59:33.900374   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 62/120
	I0802 17:59:34.901709   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 63/120
	I0802 17:59:35.903211   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 64/120
	I0802 17:59:36.905283   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 65/120
	I0802 17:59:37.906583   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 66/120
	I0802 17:59:38.908096   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 67/120
	I0802 17:59:39.909469   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 68/120
	I0802 17:59:40.910979   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 69/120
	I0802 17:59:41.913149   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 70/120
	I0802 17:59:42.914557   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 71/120
	I0802 17:59:43.916017   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 72/120
	I0802 17:59:44.917448   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 73/120
	I0802 17:59:45.919571   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 74/120
	I0802 17:59:46.921693   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 75/120
	I0802 17:59:47.923089   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 76/120
	I0802 17:59:48.924452   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 77/120
	I0802 17:59:49.925815   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 78/120
	I0802 17:59:50.927475   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 79/120
	I0802 17:59:51.929995   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 80/120
	I0802 17:59:52.931589   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 81/120
	I0802 17:59:53.932971   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 82/120
	I0802 17:59:54.934533   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 83/120
	I0802 17:59:55.935819   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 84/120
	I0802 17:59:56.937112   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 85/120
	I0802 17:59:57.938497   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 86/120
	I0802 17:59:58.939947   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 87/120
	I0802 17:59:59.941544   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 88/120
	I0802 18:00:00.943156   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 89/120
	I0802 18:00:01.945289   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 90/120
	I0802 18:00:02.946539   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 91/120
	I0802 18:00:03.947918   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 92/120
	I0802 18:00:04.949280   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 93/120
	I0802 18:00:05.950630   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 94/120
	I0802 18:00:06.952711   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 95/120
	I0802 18:00:07.955089   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 96/120
	I0802 18:00:08.956534   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 97/120
	I0802 18:00:09.957821   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 98/120
	I0802 18:00:10.959327   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 99/120
	I0802 18:00:11.961701   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 100/120
	I0802 18:00:12.963318   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 101/120
	I0802 18:00:13.964789   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 102/120
	I0802 18:00:14.966196   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 103/120
	I0802 18:00:15.968225   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 104/120
	I0802 18:00:16.970137   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 105/120
	I0802 18:00:17.971419   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 106/120
	I0802 18:00:18.973758   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 107/120
	I0802 18:00:19.975238   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 108/120
	I0802 18:00:20.977587   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 109/120
	I0802 18:00:21.979507   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 110/120
	I0802 18:00:22.981126   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 111/120
	I0802 18:00:23.983362   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 112/120
	I0802 18:00:24.985753   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 113/120
	I0802 18:00:25.987026   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 114/120
	I0802 18:00:26.988864   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 115/120
	I0802 18:00:27.990416   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 116/120
	I0802 18:00:28.992213   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 117/120
	I0802 18:00:29.993903   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 118/120
	I0802 18:00:30.995296   31499 main.go:141] libmachine: (ha-652395-m04) Waiting for machine to stop 119/120
	I0802 18:00:31.996193   31499 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 18:00:31.996244   31499 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0802 18:00:31.997907   31499 out.go:177] 
	W0802 18:00:31.999245   31499 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0802 18:00:31.999259   31499 out.go:239] * 
	* 
	W0802 18:00:32.001634   31499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:00:32.003068   31499 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-652395 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr: exit status 3 (18.838939095s)

                                                
                                                
-- stdout --
	ha-652395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-652395-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:00:32.045001   31949 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:00:32.045269   31949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:00:32.045279   31949 out.go:304] Setting ErrFile to fd 2...
	I0802 18:00:32.045283   31949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:00:32.045445   31949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:00:32.045607   31949 out.go:298] Setting JSON to false
	I0802 18:00:32.045631   31949 mustload.go:65] Loading cluster: ha-652395
	I0802 18:00:32.045671   31949 notify.go:220] Checking for updates...
	I0802 18:00:32.046047   31949 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:00:32.046065   31949 status.go:255] checking status of ha-652395 ...
	I0802 18:00:32.046425   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.046482   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.066787   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I0802 18:00:32.067338   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.067889   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.067910   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.068231   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.068440   31949 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 18:00:32.069975   31949 status.go:330] ha-652395 host status = "Running" (err=<nil>)
	I0802 18:00:32.069991   31949 host.go:66] Checking if "ha-652395" exists ...
	I0802 18:00:32.070271   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.070312   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.085447   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0802 18:00:32.085806   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.086256   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.086278   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.086582   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.086789   31949 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 18:00:32.089603   31949 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 18:00:32.090043   31949 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 18:00:32.090068   31949 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 18:00:32.090202   31949 host.go:66] Checking if "ha-652395" exists ...
	I0802 18:00:32.090599   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.090645   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.104969   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33899
	I0802 18:00:32.105416   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.105850   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.105870   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.106151   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.106351   31949 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 18:00:32.106532   31949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:00:32.106555   31949 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 18:00:32.109028   31949 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 18:00:32.109415   31949 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 18:00:32.109439   31949 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 18:00:32.109557   31949 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 18:00:32.109781   31949 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 18:00:32.109934   31949 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 18:00:32.110059   31949 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 18:00:32.196217   31949 ssh_runner.go:195] Run: systemctl --version
	I0802 18:00:32.202779   31949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:00:32.218589   31949 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 18:00:32.218613   31949 api_server.go:166] Checking apiserver status ...
	I0802 18:00:32.218644   31949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:00:32.236501   31949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5156/cgroup
	W0802 18:00:32.245676   31949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5156/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:00:32.245718   31949 ssh_runner.go:195] Run: ls
	I0802 18:00:32.249805   31949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 18:00:32.254178   31949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 18:00:32.254200   31949 status.go:422] ha-652395 apiserver status = Running (err=<nil>)
	I0802 18:00:32.254229   31949 status.go:257] ha-652395 status: &{Name:ha-652395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:00:32.254255   31949 status.go:255] checking status of ha-652395-m02 ...
	I0802 18:00:32.254578   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.254635   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.269709   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42161
	I0802 18:00:32.270087   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.270578   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.270621   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.270933   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.271165   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetState
	I0802 18:00:32.272658   31949 status.go:330] ha-652395-m02 host status = "Running" (err=<nil>)
	I0802 18:00:32.272675   31949 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 18:00:32.272950   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.272981   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.287733   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0802 18:00:32.288156   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.288545   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.288566   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.288800   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.288959   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetIP
	I0802 18:00:32.292044   31949 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 18:00:32.292488   31949 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:55:14 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 18:00:32.292521   31949 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 18:00:32.292681   31949 host.go:66] Checking if "ha-652395-m02" exists ...
	I0802 18:00:32.292973   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.293014   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.307608   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I0802 18:00:32.307967   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.308393   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.308416   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.308701   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.308881   31949 main.go:141] libmachine: (ha-652395-m02) Calling .DriverName
	I0802 18:00:32.309041   31949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:00:32.309055   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHHostname
	I0802 18:00:32.311761   31949 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 18:00:32.312146   31949 main.go:141] libmachine: (ha-652395-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d8:1e", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:55:14 +0000 UTC Type:0 Mac:52:54:00:da:d8:1e Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-652395-m02 Clientid:01:52:54:00:da:d8:1e}
	I0802 18:00:32.312183   31949 main.go:141] libmachine: (ha-652395-m02) DBG | domain ha-652395-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:da:d8:1e in network mk-ha-652395
	I0802 18:00:32.312316   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHPort
	I0802 18:00:32.312497   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHKeyPath
	I0802 18:00:32.312643   31949 main.go:141] libmachine: (ha-652395-m02) Calling .GetSSHUsername
	I0802 18:00:32.312775   31949 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m02/id_rsa Username:docker}
	I0802 18:00:32.399694   31949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:00:32.416709   31949 kubeconfig.go:125] found "ha-652395" server: "https://192.168.39.254:8443"
	I0802 18:00:32.416733   31949 api_server.go:166] Checking apiserver status ...
	I0802 18:00:32.416775   31949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:00:32.434520   31949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W0802 18:00:32.443793   31949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:00:32.443855   31949 ssh_runner.go:195] Run: ls
	I0802 18:00:32.447572   31949 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0802 18:00:32.451487   31949 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0802 18:00:32.451506   31949 status.go:422] ha-652395-m02 apiserver status = Running (err=<nil>)
	I0802 18:00:32.451515   31949 status.go:257] ha-652395-m02 status: &{Name:ha-652395-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:00:32.451528   31949 status.go:255] checking status of ha-652395-m04 ...
	I0802 18:00:32.451811   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.451841   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.466804   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33461
	I0802 18:00:32.467187   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.467631   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.467650   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.467955   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.468126   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetState
	I0802 18:00:32.469707   31949 status.go:330] ha-652395-m04 host status = "Running" (err=<nil>)
	I0802 18:00:32.469722   31949 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 18:00:32.470101   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.470142   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.485095   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45363
	I0802 18:00:32.485505   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.486013   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.486031   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.486384   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.486557   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetIP
	I0802 18:00:32.489279   31949 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 18:00:32.489782   31949 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:57:58 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 18:00:32.489810   31949 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 18:00:32.489960   31949 host.go:66] Checking if "ha-652395-m04" exists ...
	I0802 18:00:32.490253   31949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:00:32.490297   31949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:00:32.504939   31949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41723
	I0802 18:00:32.505350   31949 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:00:32.505831   31949 main.go:141] libmachine: Using API Version  1
	I0802 18:00:32.505858   31949 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:00:32.506156   31949 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:00:32.506343   31949 main.go:141] libmachine: (ha-652395-m04) Calling .DriverName
	I0802 18:00:32.506562   31949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:00:32.506591   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHHostname
	I0802 18:00:32.509150   31949 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 18:00:32.509688   31949 main.go:141] libmachine: (ha-652395-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:40:46", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:57:58 +0000 UTC Type:0 Mac:52:54:00:c0:40:46 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-652395-m04 Clientid:01:52:54:00:c0:40:46}
	I0802 18:00:32.509717   31949 main.go:141] libmachine: (ha-652395-m04) DBG | domain ha-652395-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:c0:40:46 in network mk-ha-652395
	I0802 18:00:32.509887   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHPort
	I0802 18:00:32.510062   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHKeyPath
	I0802 18:00:32.510253   31949 main.go:141] libmachine: (ha-652395-m04) Calling .GetSSHUsername
	I0802 18:00:32.510459   31949 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395-m04/id_rsa Username:docker}
	W0802 18:00:50.843359   31949 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.222:22: connect: no route to host
	W0802 18:00:50.843470   31949 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	E0802 18:00:50.843494   31949 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host
	I0802 18:00:50.843507   31949 status.go:257] ha-652395-m04 status: &{Name:ha-652395-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0802 18:00:50.843540   31949 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.222:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-652395 -n ha-652395
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-652395 logs -n 25: (1.540062372s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m04 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp testdata/cp-test.txt                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395:/home/docker/cp-test_ha-652395-m04_ha-652395.txt                       |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395 sudo cat                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395.txt                                 |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m02:/home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m02 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m03:/home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n                                                                 | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | ha-652395-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-652395 ssh -n ha-652395-m03 sudo cat                                          | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC | 02 Aug 24 17:48 UTC |
	|         | /home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-652395 node stop m02 -v=7                                                     | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-652395 node start m02 -v=7                                                    | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-652395 -v=7                                                           | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-652395 -v=7                                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-652395 --wait=true -v=7                                                    | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:53 UTC | 02 Aug 24 17:58 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-652395                                                                | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:58 UTC |                     |
	| node    | ha-652395 node delete m03 -v=7                                                   | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:58 UTC | 02 Aug 24 17:58 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-652395 stop -v=7                                                              | ha-652395 | jenkins | v1.33.1 | 02 Aug 24 17:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:53:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:53:27.981884   29606 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:53:27.982006   29606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:53:27.982015   29606 out.go:304] Setting ErrFile to fd 2...
	I0802 17:53:27.982019   29606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:53:27.982188   29606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:53:27.982706   29606 out.go:298] Setting JSON to false
	I0802 17:53:27.983601   29606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2152,"bootTime":1722619056,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:53:27.983658   29606 start.go:139] virtualization: kvm guest
	I0802 17:53:27.985819   29606 out.go:177] * [ha-652395] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:53:27.987274   29606 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:53:27.987328   29606 notify.go:220] Checking for updates...
	I0802 17:53:27.989379   29606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:53:27.990537   29606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:53:27.991673   29606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:53:27.992821   29606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:53:27.994166   29606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:53:27.995890   29606 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:53:27.996047   29606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:53:27.996654   29606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:53:27.996708   29606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:53:28.012870   29606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43729
	I0802 17:53:28.013340   29606 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:53:28.013941   29606 main.go:141] libmachine: Using API Version  1
	I0802 17:53:28.013960   29606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:53:28.014308   29606 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:53:28.014484   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.050139   29606 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 17:53:28.051480   29606 start.go:297] selected driver: kvm2
	I0802 17:53:28.051495   29606 start.go:901] validating driver "kvm2" against &{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:53:28.051674   29606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:53:28.052111   29606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:53:28.052208   29606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:53:28.066763   29606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:53:28.067695   29606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 17:53:28.067729   29606 cni.go:84] Creating CNI manager for ""
	I0802 17:53:28.067739   29606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 17:53:28.067804   29606 start.go:340] cluster config:
	{Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:53:28.067947   29606 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:53:28.069826   29606 out.go:177] * Starting "ha-652395" primary control-plane node in "ha-652395" cluster
	I0802 17:53:28.071228   29606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:53:28.071264   29606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:53:28.071270   29606 cache.go:56] Caching tarball of preloaded images
	I0802 17:53:28.071349   29606 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 17:53:28.071359   29606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 17:53:28.071485   29606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/config.json ...
	I0802 17:53:28.071681   29606 start.go:360] acquireMachinesLock for ha-652395: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 17:53:28.071722   29606 start.go:364] duration metric: took 23.255µs to acquireMachinesLock for "ha-652395"
	I0802 17:53:28.071736   29606 start.go:96] Skipping create...Using existing machine configuration
	I0802 17:53:28.071744   29606 fix.go:54] fixHost starting: 
	I0802 17:53:28.072091   29606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:53:28.072128   29606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:53:28.086944   29606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0802 17:53:28.087407   29606 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:53:28.087856   29606 main.go:141] libmachine: Using API Version  1
	I0802 17:53:28.087882   29606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:53:28.088378   29606 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:53:28.088611   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.088803   29606 main.go:141] libmachine: (ha-652395) Calling .GetState
	I0802 17:53:28.090381   29606 fix.go:112] recreateIfNeeded on ha-652395: state=Running err=<nil>
	W0802 17:53:28.090397   29606 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 17:53:28.092460   29606 out.go:177] * Updating the running kvm2 "ha-652395" VM ...
	I0802 17:53:28.093857   29606 machine.go:94] provisionDockerMachine start ...
	I0802 17:53:28.093878   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:53:28.094078   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.096238   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.096670   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.096695   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.096819   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.096985   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.097131   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.097269   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.097446   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.097645   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.097657   29606 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 17:53:28.208249   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:53:28.208283   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.208541   29606 buildroot.go:166] provisioning hostname "ha-652395"
	I0802 17:53:28.208567   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.208746   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.211460   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.211892   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.211926   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.212006   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.212226   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.212388   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.212557   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.212708   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.212916   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.212936   29606 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-652395 && echo "ha-652395" | sudo tee /etc/hostname
	I0802 17:53:28.334100   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-652395
	
	I0802 17:53:28.334125   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.336905   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.337255   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.337286   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.337483   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.337676   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.337832   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.337978   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.338129   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.338293   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.338306   29606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-652395' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-652395/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-652395' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 17:53:28.456091   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 17:53:28.456128   29606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 17:53:28.456158   29606 buildroot.go:174] setting up certificates
	I0802 17:53:28.456167   29606 provision.go:84] configureAuth start
	I0802 17:53:28.456176   29606 main.go:141] libmachine: (ha-652395) Calling .GetMachineName
	I0802 17:53:28.456476   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:53:28.459480   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.459901   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.459942   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.460045   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.462353   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.462722   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.462748   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.462894   29606 provision.go:143] copyHostCerts
	I0802 17:53:28.462934   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:53:28.462977   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 17:53:28.462986   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 17:53:28.463062   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 17:53:28.463183   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:53:28.463205   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 17:53:28.463210   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 17:53:28.463240   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 17:53:28.463360   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:53:28.463380   29606 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 17:53:28.463384   29606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 17:53:28.463411   29606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 17:53:28.463486   29606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.ha-652395 san=[127.0.0.1 192.168.39.210 ha-652395 localhost minikube]
	I0802 17:53:28.736655   29606 provision.go:177] copyRemoteCerts
	I0802 17:53:28.736713   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 17:53:28.736735   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.739291   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.739665   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.739695   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.739943   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.740145   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.740290   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.740431   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:53:28.827264   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 17:53:28.827360   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0802 17:53:28.854459   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 17:53:28.854527   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 17:53:28.881733   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 17:53:28.881799   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 17:53:28.914763   29606 provision.go:87] duration metric: took 458.575876ms to configureAuth
	I0802 17:53:28.914800   29606 buildroot.go:189] setting minikube options for container-runtime
	I0802 17:53:28.915004   29606 config.go:182] Loaded profile config "ha-652395": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:53:28.915078   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:53:28.917915   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.918350   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:53:28.918376   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:53:28.918566   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:53:28.918792   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.918978   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:53:28.919153   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:53:28.919330   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:53:28.919572   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:53:28.919604   29606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 17:54:59.845145   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 17:54:59.845178   29606 machine.go:97] duration metric: took 1m31.751305873s to provisionDockerMachine
	I0802 17:54:59.845189   29606 start.go:293] postStartSetup for "ha-652395" (driver="kvm2")
	I0802 17:54:59.845201   29606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 17:54:59.845216   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:54:59.845526   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 17:54:59.845552   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:54:59.848564   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.848960   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:54:59.848982   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.849158   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:54:59.849340   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:54:59.849513   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:54:59.849618   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:54:59.935254   29606 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 17:54:59.939326   29606 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 17:54:59.939361   29606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 17:54:59.939453   29606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 17:54:59.939533   29606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 17:54:59.939543   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 17:54:59.939619   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 17:54:59.948795   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:54:59.971233   29606 start.go:296] duration metric: took 126.028004ms for postStartSetup
	I0802 17:54:59.971279   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:54:59.971560   29606 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0802 17:54:59.971586   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:54:59.974208   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.974563   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:54:59.974593   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:54:59.974731   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:54:59.974901   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:54:59.975057   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:54:59.975208   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	W0802 17:55:00.057384   29606 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0802 17:55:00.057413   29606 fix.go:56] duration metric: took 1m31.985669191s for fixHost
	I0802 17:55:00.057482   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.059946   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.060261   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.060293   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.060387   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.060564   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.060733   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.060851   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.061015   29606 main.go:141] libmachine: Using SSH client type: native
	I0802 17:55:00.061204   29606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.210 22 <nil> <nil>}
	I0802 17:55:00.061217   29606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 17:55:00.172476   29606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722621300.128893529
	
	I0802 17:55:00.172499   29606 fix.go:216] guest clock: 1722621300.128893529
	I0802 17:55:00.172509   29606 fix.go:229] Guest: 2024-08-02 17:55:00.128893529 +0000 UTC Remote: 2024-08-02 17:55:00.057431605 +0000 UTC m=+92.108375435 (delta=71.461924ms)
	I0802 17:55:00.172556   29606 fix.go:200] guest clock delta is within tolerance: 71.461924ms
	I0802 17:55:00.172561   29606 start.go:83] releasing machines lock for "ha-652395", held for 1m32.100830528s
	I0802 17:55:00.172583   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.172850   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:55:00.175735   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.176184   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.176212   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.176411   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.176842   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.177008   29606 main.go:141] libmachine: (ha-652395) Calling .DriverName
	I0802 17:55:00.177107   29606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 17:55:00.177146   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.177199   29606 ssh_runner.go:195] Run: cat /version.json
	I0802 17:55:00.177219   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHHostname
	I0802 17:55:00.179870   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180213   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180269   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.180293   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180455   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.180584   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:00.180613   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:00.180638   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.180762   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHPort
	I0802 17:55:00.180825   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.180932   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHKeyPath
	I0802 17:55:00.181057   29606 main.go:141] libmachine: (ha-652395) Calling .GetSSHUsername
	I0802 17:55:00.181107   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:55:00.181198   29606 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/ha-652395/id_rsa Username:docker}
	I0802 17:55:00.334805   29606 ssh_runner.go:195] Run: systemctl --version
	I0802 17:55:00.344087   29606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 17:55:00.496043   29606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 17:55:00.501429   29606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 17:55:00.501493   29606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 17:55:00.510411   29606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0802 17:55:00.510436   29606 start.go:495] detecting cgroup driver to use...
	I0802 17:55:00.510505   29606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 17:55:00.526229   29606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 17:55:00.540873   29606 docker.go:217] disabling cri-docker service (if available) ...
	I0802 17:55:00.540928   29606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 17:55:00.554608   29606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 17:55:00.568179   29606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 17:55:00.715955   29606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 17:55:00.856815   29606 docker.go:233] disabling docker service ...
	I0802 17:55:00.856879   29606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 17:55:00.872630   29606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 17:55:00.885780   29606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 17:55:01.027040   29606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 17:55:01.169656   29606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 17:55:01.184009   29606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 17:55:01.204128   29606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 17:55:01.204202   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.214292   29606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 17:55:01.214362   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.224034   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.233747   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.243864   29606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 17:55:01.253727   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.263338   29606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 17:55:01.273983   29606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
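Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf in roughly the following shape; only the keys touched here are shown, and the TOML table headers are an assumption based on the stock drop-in shipped in the minikube ISO:

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # [crio.image]
    # pause_image = "registry.k8s.io/pause:3.9"
    #
    # [crio.runtime]
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]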
	I0802 17:55:01.284106   29606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 17:55:01.292917   29606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 17:55:01.302295   29606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:55:01.440061   29606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 17:55:01.724369   29606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 17:55:01.724443   29606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 17:55:01.729937   29606 start.go:563] Will wait 60s for crictl version
	I0802 17:55:01.730002   29606 ssh_runner.go:195] Run: which crictl
	I0802 17:55:01.733602   29606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 17:55:01.768060   29606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 17:55:01.768147   29606 ssh_runner.go:195] Run: crio --version
	I0802 17:55:01.795814   29606 ssh_runner.go:195] Run: crio --version
	I0802 17:55:01.825284   29606 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 17:55:01.826741   29606 main.go:141] libmachine: (ha-652395) Calling .GetIP
	I0802 17:55:01.829259   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:01.829696   29606 main.go:141] libmachine: (ha-652395) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:3a:9a", ip: ""} in network mk-ha-652395: {Iface:virbr1 ExpiryTime:2024-08-02 18:43:41 +0000 UTC Type:0 Mac:52:54:00:ae:3a:9a Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-652395 Clientid:01:52:54:00:ae:3a:9a}
	I0802 17:55:01.829721   29606 main.go:141] libmachine: (ha-652395) DBG | domain ha-652395 has defined IP address 192.168.39.210 and MAC address 52:54:00:ae:3a:9a in network mk-ha-652395
	I0802 17:55:01.829918   29606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 17:55:01.834385   29606 kubeadm.go:883] updating cluster {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 17:55:01.834510   29606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:55:01.834563   29606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:55:01.879763   29606 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:55:01.879782   29606 crio.go:433] Images already preloaded, skipping extraction
	I0802 17:55:01.879831   29606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 17:55:01.918899   29606 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 17:55:01.918922   29606 cache_images.go:84] Images are preloaded, skipping loading
	I0802 17:55:01.918931   29606 kubeadm.go:934] updating node { 192.168.39.210 8443 v1.30.3 crio true true} ...
	I0802 17:55:01.919041   29606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-652395 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 17:55:01.919121   29606 ssh_runner.go:195] Run: crio config
	I0802 17:55:01.966895   29606 cni.go:84] Creating CNI manager for ""
	I0802 17:55:01.966917   29606 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0802 17:55:01.966929   29606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 17:55:01.967008   29606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.210 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-652395 NodeName:ha-652395 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 17:55:01.967262   29606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.210
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-652395"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
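The generated kubeadm config above is what the cluster was originally initialized with; on a running control plane it can be compared against the live copy kubeadm keeps in the kube-system namespace (standard kubeadm behaviour, shown here as an illustrative check rather than a step this run performs):

    kubectl --context ha-652395 -n kube-system get configmap kubeadm-config \
      -o jsonpath='{.data.ClusterConfiguration}'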
	
	I0802 17:55:01.967330   29606 kube-vip.go:115] generating kube-vip config ...
	I0802 17:55:01.967383   29606 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0802 17:55:01.978137   29606 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0802 17:55:01.978252   29606 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
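A quick sanity check of the kube-vip static pod and the control-plane VIP it advertises (192.168.39.254 per the config above); these commands are illustrative, and the VIP is only bound on whichever control-plane node currently holds the plndr-cp-lock lease:

    sudo cat /etc/kubernetes/manifests/kube-vip.yaml
    ip -4 addr show dev eth0 | grep 192.168.39.254 || echo "VIP held by another control-plane node"
    curl -k --max-time 2 https://192.168.39.254:8443/healthz; echo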
	I0802 17:55:01.978342   29606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 17:55:01.987305   29606 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 17:55:01.987423   29606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0802 17:55:01.996160   29606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0802 17:55:02.012102   29606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 17:55:02.027941   29606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 17:55:02.043509   29606 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
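With the unit drop-in, kubeadm config and kube-vip manifest copied over, the rendered file can be double-checked before kubelet starts; a sketch assuming kubeadm v1.30's `config validate` subcommand and the binary path used elsewhere in this run:

    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in written above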
	I0802 17:55:02.060749   29606 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0802 17:55:02.064815   29606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 17:55:02.212935   29606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 17:55:02.282071   29606 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395 for IP: 192.168.39.210
	I0802 17:55:02.282098   29606 certs.go:194] generating shared ca certs ...
	I0802 17:55:02.282119   29606 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.282345   29606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 17:55:02.282401   29606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 17:55:02.282424   29606 certs.go:256] generating profile certs ...
	I0802 17:55:02.282549   29606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/client.key
	I0802 17:55:02.282587   29606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c
	I0802 17:55:02.282608   29606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.210 192.168.39.220 192.168.39.62 192.168.39.254]
	I0802 17:55:02.436648   29606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c ...
	I0802 17:55:02.436681   29606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c: {Name:mk30a71839e34750fa7129e3bd9f1af0592219af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.436853   29606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c ...
	I0802 17:55:02.436865   29606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c: {Name:mkcf581c5b6beb3c065bad1c59e6accde21cde4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:55:02.436930   29606 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt.e64b3c7c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt
	I0802 17:55:02.437081   29606 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key.e64b3c7c -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key
	I0802 17:55:02.437205   29606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key
	I0802 17:55:02.437225   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 17:55:02.437238   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 17:55:02.437250   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 17:55:02.437264   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 17:55:02.437282   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 17:55:02.437317   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 17:55:02.437335   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 17:55:02.437345   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 17:55:02.437399   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 17:55:02.437432   29606 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 17:55:02.437441   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 17:55:02.437460   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 17:55:02.437482   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 17:55:02.437503   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 17:55:02.437541   29606 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 17:55:02.437568   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 17:55:02.437582   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 17:55:02.437594   29606 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:02.438119   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 17:55:02.694023   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 17:55:02.771506   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 17:55:02.879030   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 17:55:02.997040   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0802 17:55:03.139689   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 17:55:03.191837   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 17:55:03.286648   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/ha-652395/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 17:55:03.429902   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 17:55:03.504973   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 17:55:03.583177   29606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 17:55:03.621205   29606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 17:55:03.641979   29606 ssh_runner.go:195] Run: openssl version
	I0802 17:55:03.648479   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 17:55:03.664674   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.669556   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.669621   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 17:55:03.676464   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 17:55:03.693337   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 17:55:03.712266   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.723934   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.723995   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 17:55:03.733929   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 17:55:03.751717   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 17:55:03.775569   29606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.789593   29606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.789651   29606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 17:55:03.805417   29606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
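The 51391683.0, 3ec20f2e.0 and b5213941.0 names above are OpenSSL subject-hash symlinks; the hash comes from the certificate itself, so the generic form of the per-certificate step performed above is (illustrative loop, same effect as the individual commands):

    for pem in /usr/share/ca-certificates/*.pem; do
      hash=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    done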
	I0802 17:55:03.819473   29606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 17:55:03.824725   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 17:55:03.833108   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 17:55:03.843492   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 17:55:03.852286   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 17:55:03.862183   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 17:55:03.872172   29606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
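Each -checkend 86400 call above exits non-zero if the certificate expires within 24 hours; a compact equivalent over the same control-plane certs (paths taken from the commands above):

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "expires within 24h: ${crt}.crt"
    done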
	I0802 17:55:03.879528   29606 kubeadm.go:392] StartCluster: {Name:ha-652395 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-652395 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.210 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.62 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.222 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:55:03.879687   29606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 17:55:03.879748   29606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 17:55:03.936763   29606 cri.go:89] found id: "211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93"
	I0802 17:55:03.936785   29606 cri.go:89] found id: "4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d"
	I0802 17:55:03.936808   29606 cri.go:89] found id: "86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82"
	I0802 17:55:03.936813   29606 cri.go:89] found id: "bf950c5d12e435f630c4b4c3abcb6a81923d57812df9231b4238094d723c3c5c"
	I0802 17:55:03.936817   29606 cri.go:89] found id: "a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4"
	I0802 17:55:03.936822   29606 cri.go:89] found id: "c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6"
	I0802 17:55:03.936859   29606 cri.go:89] found id: "219d7f25bcfd6e77de5845534f7aaf968d2d78f12867c3527ea9e51c861bdaa8"
	I0802 17:55:03.936871   29606 cri.go:89] found id: "fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf"
	I0802 17:55:03.936876   29606 cri.go:89] found id: "26f9dbb4e53b68e2ae8d51792f99b7f0ed2cc9b696a1ff5456b7e7684f96f87f"
	I0802 17:55:03.936883   29606 cri.go:89] found id: "044a175eb50533624b826a3c1d1aa52bb8d46178b9a460454508b7721c870c20"
	I0802 17:55:03.936887   29606 cri.go:89] found id: "d809bfdbc457e4365c2eedbffa0f6ac8e940d0597edea05a183fb77ce8c6937d"
	I0802 17:55:03.936892   29606 cri.go:89] found id: "131024fd4f59ee579527315d5b100fb042ffd52f2030700b6c8d0d77872ee0e5"
	I0802 17:55:03.936897   29606 cri.go:89] found id: "c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e"
	I0802 17:55:03.936903   29606 cri.go:89] found id: "122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1"
	I0802 17:55:03.936911   29606 cri.go:89] found id: "e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1"
	I0802 17:55:03.936915   29606 cri.go:89] found id: "dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d"
	I0802 17:55:03.936919   29606 cri.go:89] found id: "a3c95a2e3488e52cee7451975bafbc0091727b32b47eac57ec5f1c730e2b77e6"
	I0802 17:55:03.936931   29606 cri.go:89] found id: "c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca"
	I0802 17:55:03.936938   29606 cri.go:89] found id: "fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1"
	I0802 17:55:03.936942   29606 cri.go:89] found id: ""
	I0802 17:55:03.936987   29606 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.407499894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621651407475720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06b201c6-bb90-43d4-8dcd-bb422f6f1edc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.408165432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7615cb38-c914-43fd-9f71-407dc0445cd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.408221542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7615cb38-c914-43fd-9f71-407dc0445cd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.408663373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7615cb38-c914-43fd-9f71-407dc0445cd0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.450981368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d411d83b-b199-414a-9c5f-2a01ab2fd845 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.451059900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d411d83b-b199-414a-9c5f-2a01ab2fd845 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.452266339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e726e16d-8f46-45d4-bc72-9b270ad42741 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.452876996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621651452852317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e726e16d-8f46-45d4-bc72-9b270ad42741 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.453542903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81b0c2c1-3520-4ebb-993e-437886962cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.453599209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81b0c2c1-3520-4ebb-993e-437886962cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.454022400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81b0c2c1-3520-4ebb-993e-437886962cc5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.497566119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0d684ac-8ccc-4ded-9cb3-875572485c26 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.497662691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0d684ac-8ccc-4ded-9cb3-875572485c26 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.498992651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=764b54fc-bf16-4848-ac52-2f55b4247962 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.499462628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621651499398449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=764b54fc-bf16-4848-ac52-2f55b4247962 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.500086504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e93300f4-136e-4e61-9381-2744be6e6b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.500140962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e93300f4-136e-4e61-9381-2744be6e6b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.500588614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e93300f4-136e-4e61-9381-2744be6e6b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.545240923Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eed014b8-b698-44c7-a072-fa35b6045237 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.545355322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eed014b8-b698-44c7-a072-fa35b6045237 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.546585340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a9c799b-e374-4606-bac5-caf718afb1aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.547027211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722621651547003345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a9c799b-e374-4606-bac5-caf718afb1aa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.547623819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d09e5ec7-9e98-4599-83fd-903c558752e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.547693712Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d09e5ec7-9e98-4599-83fd-903c558752e7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:00:51 ha-652395 crio[3732]: time="2024-08-02 18:00:51.548098682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722621384862059853,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a7d5f519122fc0e393279d94de214bed4cabe4208bf1906b83c79263052a52a,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722621374866993808,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 4,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722621356858106032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9,PodSandboxId:704bed46ab9f19498685194f2f3a6fc7dec741b9b7447e7844da7e74bc424c1f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722621344859737839,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b35503df9ee27b31247351a3b8b83f9c,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8effce7b51652c72ad93455ab4157ba7bad4e23466ab47df9170367cf0f6bf3a,PodSandboxId:5aa357a4cd3197f10b4c75df55d57d7c5a5904b7b2f2dd5e6cf9b511a7d2adc3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722621336146103449,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annotations:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e125f9f2e129e9b4cdf81d79c93193ed41662eab1d95610accfb7b8b24d88a5,PodSandboxId:df01db970890c825d82f855dc05198a418b9844ae2aa3385e3f4c922274e576a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722621329857387182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 149760da-f585-48bf-9cc8-63ff848cf3c8,},Annotations:map[string]string{io.kubernetes.container.hash: ef29fcd8,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d0a59311a1ca72e6192fb90233a279bd12fd5f8830d77341397664b0ffc5bd,PodSandboxId:80cea65b465eee4b30484f0dcb6d09e7d506d6d41378739f80b6cd26af9e80c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722621318661187681,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d83d91cad7af9509d134761d7a124551,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d,PodSandboxId:78be7e219081ea67125110fdab57465a399321d4b7eb68d8500d3621d30d5930,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722621303333952441,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8445990b47d8cfa9cb5c64d20f86596,},Annotations:map[string]string{io.kubernetes.container.hash: 13504d9b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93,PodSandboxId:fd6f55a18f711e046686b51d3c95c93b9a247566a863611e18d5ce485b3bf9cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722621303338825108,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82,PodSandboxId:016ccc975574701510dddec56eafd3ce51bdab0008015e3f7c4c7107427c4945,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722621303230901042,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf950c5d12e435f630c4b4c3abcb6a8
1923d57812df9231b4238094d723c3c5c,PodSandboxId:87acad60b8a8730be58c0d88ea8de02091f8644e2fa012b161c4863176726b41,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722621302973026696,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4,
PodSandboxId:03e1788bd730df53342906be7d58e184c84d923f9dc4f99a879ff16c703ae995,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722621302958928266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6,PodSandboxId:6c4e1481ad362c4d14cb
ca4551d4efa32dd8abd389043c0e1419f36d541043b0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302861002758,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf,PodSandboxId:8608d21543358f2b9c4d6560a419e974a9cb7c9aa201d7582ad42ef2643b461e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722621302774016319,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubernetes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fd869ff4b02dd1be22e9c5ec9da70cf6208b88a9f7214c3b3fdbb9a3b5286a4,PodSandboxId:e8db151d94a976526f3c03e7267087ec9793ea5356ac7d8a28ec2887fa6bc9b2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722620817831344179,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wwdvm,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d2d25e8-37d0-45c4-9b5a-9722d329d86f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 44e60a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e,PodSandboxId:236df4e4d374d4d28812bc9b1853531dda332dcdbc476bc1edb0c91e92fc30bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673204775483,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7bnn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4eedd91-fcf6-4cef-81b0-d043c38cc00c,},Annotations:map[string]string{io.kubern
etes.container.hash: 92e7f6b3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1,PodSandboxId:7a85af598179819732d5caa764cff2924b0c6e5460e5180c424920f004eb6ad0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722620673178729183,Labels:map[string]string{io.kubernetes.container.name: cored
ns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gzmsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5baa21b-dddf-43b6-a5a2-2b8f8e452a83,},Annotations:map[string]string{io.kubernetes.container.hash: ae44d3b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1,PodSandboxId:93bf8df122de4b077e35c99bfd5fae1b8b4161110a3eca610078b6907355bdda,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722620661418686737,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bjrkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d82e24-8aa1-4c71-b904-03b53de10142,},Annotations:map[string]string{io.kubernetes.container.hash: 754c099a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d,PodSandboxId:aa85cd011b1097fb479e33944d3a642849af0d1203c2453af3e20be90e589413,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722620657834190040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l7npk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8db2cf39-da2a-42f7-8f34-6cd8f61d0b08,},Annotations:map[string]string{io.kubernetes.container.hash: fe49bd25,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca,PodSandboxId:d14257a1927ee8e6822e802c07fe22d8289054c4b41fe98c59078f7d2353ed2a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b767
22eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722620638641748836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3c9c044aaa51f57cf98fff08c0c405f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1,PodSandboxId:540d9595b8d862eebf81e7a99edaac7ca057b0aa549d2e859ecd38d650ffc826,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0c
fd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722620638599921977,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-652395,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe06cf29caa5fbee7270b029a9ae89d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6fbdd18b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d09e5ec7-9e98-4599-83fd-903c558752e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7d6d025dc3e8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   3                   704bed46ab9f1       kube-controller-manager-ha-652395
	2a7d5f519122f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   df01db970890c       storage-provisioner
	d3d0b5e102e9f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   78be7e219081e       kube-apiserver-ha-652395
	b764e2109a4e9       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   2                   704bed46ab9f1       kube-controller-manager-ha-652395
	8effce7b51652       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   5aa357a4cd319       busybox-fc5497c4f-wwdvm
	9e125f9f2e129       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   df01db970890c       storage-provisioner
	75d0a59311a1c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   80cea65b465ee       kube-vip-ha-652395
	211084ef30ab2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   fd6f55a18f711       kube-scheduler-ha-652395
	4c17f3881c093       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   78be7e219081e       kube-apiserver-ha-652395
	86e9c6b3f3798       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   016ccc9755747       etcd-ha-652395
	bf950c5d12e43       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   87acad60b8a87       kindnet-bjrkb
	a6e31c0eb2882       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   03e1788bd730d       kube-proxy-l7npk
	c03f76e97b2f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6c4e1481ad362       coredns-7db6d8ff4d-gzmsx
	fefd10fbf07b7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   8608d21543358       coredns-7db6d8ff4d-7bnn4
	8fd869ff4b02d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   e8db151d94a97       busybox-fc5497c4f-wwdvm
	c360a48ed21dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   236df4e4d374d       coredns-7db6d8ff4d-7bnn4
	122af758e0175       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   7a85af5981798       coredns-7db6d8ff4d-gzmsx
	e5737b2ef0345       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   93bf8df122de4       kindnet-bjrkb
	dbaf687f1fee9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   aa85cd011b109       kube-proxy-l7npk
	c587c6ce09941       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   d14257a1927ee       kube-scheduler-ha-652395
	fae5bea03ccdc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   540d9595b8d86       etcd-ha-652395
	
	
	==> coredns [122af758e017591aec64142abf5d0752bf8b31ee3416d4697be3769015e31ea1] <==
	[INFO] 10.244.0.4:56165 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000046005s
	[INFO] 10.244.0.4:44437 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000034828s
	[INFO] 10.244.0.4:35238 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032154s
	[INFO] 10.244.1.2:56315 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166841s
	[INFO] 10.244.1.2:47239 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000198329s
	[INFO] 10.244.1.2:57096 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123709s
	[INFO] 10.244.2.2:46134 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000490913s
	[INFO] 10.244.2.2:53250 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148459s
	[INFO] 10.244.0.4:56093 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118253s
	[INFO] 10.244.0.4:34180 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00008075s
	[INFO] 10.244.0.4:45410 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005242s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1903&timeout=9m11s&timeoutSeconds=551&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[1554461129]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.855) (total time: 12692ms):
	Trace[1554461129]: ---"Objects listed" error:Unauthorized 12692ms (17:53:27.547)
	Trace[1554461129]: [12.692708827s] [12.692708827s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c03f76e97b2f64ed6350e7755e4c3717eeb7f09825d9620c158ba65b15c2f8f6] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42778->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:42778->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c360a48ed21dd03cdd596daad23262091aaa088b217054f7da7d8a7daab0e13e] <==
	[INFO] 10.244.0.4:37426 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000138717s
	[INFO] 10.244.1.2:36979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000118362s
	[INFO] 10.244.2.2:57363 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012985s
	[INFO] 10.244.2.2:39508 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130428s
	[INFO] 10.244.1.2:35447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118938s
	[INFO] 10.244.2.2:32993 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168916s
	[INFO] 10.244.2.2:41103 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000214849s
	[INFO] 10.244.0.4:36090 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133411s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1856&timeout=7m40s&timeoutSeconds=460&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1903&timeout=6m18s&timeoutSeconds=378&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[329143856]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.884) (total time: 12661ms):
	Trace[329143856]: ---"Objects listed" error:Unauthorized 12661ms (17:53:27.545)
	Trace[329143856]: [12.661539453s] [12.661539453s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1310938926]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (02-Aug-2024 17:53:14.550) (total time: 12995ms):
	Trace[1310938926]: ---"Objects listed" error:Unauthorized 12995ms (17:53:27.545)
	Trace[1310938926]: [12.995424272s] [12.995424272s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1903": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fefd10fbf07b7a4e60d66d07b47d437dcb6a8423c4b8074bd916e2f7bc4446cf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-652395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T17_44_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:44:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:00:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:55:50 +0000   Fri, 02 Aug 2024 17:44:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-652395
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ba599bf07ef4e41ba86086b6ac2ff1a
	  System UUID:                5ba599bf-07ef-4e41-ba86-086b6ac2ff1a
	  Boot ID:                    ed33b037-d8f7-4cbf-a057-27f14a3cc7dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wwdvm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-7bnn4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-gzmsx             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-652395                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-bjrkb                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-652395             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-652395    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-l7npk                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-652395             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-652395                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m5s               kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node ha-652395 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node ha-652395 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node ha-652395 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-652395 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Warning  ContainerGCFailed        6m47s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m57s              node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           4m14s              node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	  Normal   RegisteredNode           3m10s              node-controller  Node ha-652395 event: Registered Node ha-652395 in Controller
	
	
	Name:               ha-652395-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_45_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:00:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 17:56:32 +0000   Fri, 02 Aug 2024 17:55:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-652395-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b4562f021ca54cf29302ae6053b176ca
	  System UUID:                b4562f02-1ca5-4cf2-9302-ae6053b176ca
	  Boot ID:                    a9ea8acb-21c4-41a3-adad-896284e4b57f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4gkm6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-652395-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7n2wh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-652395-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-652395-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rtbb6                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-652395-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-652395-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m43s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-652395-m02 status is now: NodeNotReady
	  Normal  Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-652395-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-652395-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-652395-m02 event: Registered Node ha-652395-m02 in Controller
	
	
	Name:               ha-652395-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-652395-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=ha-652395
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T17_47_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 17:47:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-652395-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 17:58:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:59:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:59:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:59:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 02 Aug 2024 17:58:04 +0000   Fri, 02 Aug 2024 17:59:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.222
	  Hostname:    ha-652395-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 998c02abf56b4784b82e5c48780cf7d3
	  System UUID:                998c02ab-f56b-4784-b82e-5c48780cf7d3
	  Boot ID:                    767ab23a-9b64-4543-b04a-3d734b32750a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-s545w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-nksdg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-d44zn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-652395-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m58s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-652395-m04 event: Registered Node ha-652395-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-652395-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-652395-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-652395-m04 has been rebooted, boot id: 767ab23a-9b64-4543-b04a-3d734b32750a
	  Normal   NodeReady                2m48s                  kubelet          Node ha-652395-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s (x2 over 4m18s)   node-controller  Node ha-652395-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +4.520223] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.851587] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.054661] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055410] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.166920] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.132294] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.235363] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +3.898825] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.781164] systemd-fstab-generator[933]: Ignoring "noauto" option for root device
	[  +0.056602] kauditd_printk_skb: 158 callbacks suppressed
	[Aug 2 17:44] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.095134] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.851149] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.579996] kauditd_printk_skb: 38 callbacks suppressed
	[Aug 2 17:45] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 2 17:55] systemd-fstab-generator[3653]: Ignoring "noauto" option for root device
	[  +0.145049] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.174682] systemd-fstab-generator[3679]: Ignoring "noauto" option for root device
	[  +0.136281] systemd-fstab-generator[3691]: Ignoring "noauto" option for root device
	[  +0.271418] systemd-fstab-generator[3719]: Ignoring "noauto" option for root device
	[  +0.768336] systemd-fstab-generator[3829]: Ignoring "noauto" option for root device
	[  +4.304021] kauditd_printk_skb: 223 callbacks suppressed
	[ +38.522867] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [86e9c6b3f3798c3bf3aaadc23d369330eba5b30cf4d21fe0062671138e497d82] <==
	{"level":"info","ts":"2024-08-02T17:57:24.3743Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.393626Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.393667Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5a5dd032def1271d","to":"254930c0dd0c8ee9","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-02T17:57:24.393886Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:57:24.407577Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"5a5dd032def1271d","to":"254930c0dd0c8ee9","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-02T17:57:24.407617Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.051052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5a5dd032def1271d switched to configuration voters=(497946097356001769 6511589553154893597)"}
	{"level":"info","ts":"2024-08-02T17:58:18.054693Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"989b3f6bb1f1f8ce","local-member-id":"5a5dd032def1271d","removed-remote-peer-id":"254930c0dd0c8ee9","removed-remote-peer-urls":["https://192.168.39.62:2380"]}
	{"level":"warn","ts":"2024-08-02T17:58:18.054864Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"5a5dd032def1271d","removed-member-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.054924Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-08-02T17:58:18.054808Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.055524Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.055583Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.055783Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.055819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.055946Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.056245Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9","error":"context canceled"}
	{"level":"warn","ts":"2024-08-02T17:58:18.056321Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"254930c0dd0c8ee9","error":"failed to read 254930c0dd0c8ee9 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-02T17:58:18.056378Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.05677Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9","error":"context canceled"}
	{"level":"info","ts":"2024-08-02T17:58:18.056812Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.05683Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:58:18.056845Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"5a5dd032def1271d","removed-remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.066189Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"5a5dd032def1271d","remote-peer-id-stream-handler":"5a5dd032def1271d","remote-peer-id-from":"254930c0dd0c8ee9"}
	{"level":"warn","ts":"2024-08-02T17:58:18.076917Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"5a5dd032def1271d","remote-peer-id-stream-handler":"5a5dd032def1271d","remote-peer-id-from":"254930c0dd0c8ee9"}
	
	
	==> etcd [fae5bea03ccdc2c83eb0f0f0cfbcafa4c9ba40a805d1abae9ffb30592802b1a1] <==
	{"level":"info","ts":"2024-08-02T17:53:29.078677Z","caller":"traceutil/trace.go:171","msg":"trace[2003454795] range","detail":"{range_begin:/registry/services/endpoints/; range_end:/registry/services/endpoints0; }","duration":"116.648752ms","start":"2024-08-02T17:53:28.962025Z","end":"2024-08-02T17:53:29.078674Z","steps":["trace[2003454795] 'agreement among raft nodes before linearized reading'  (duration: 109.327583ms)"],"step_count":1}
	2024/08/02 17:53:29 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-08-02T17:53:29.077141Z","caller":"traceutil/trace.go:171","msg":"trace[2114495648] range","detail":"{range_begin:/registry/limitranges/; range_end:/registry/limitranges0; }","duration":"706.494712ms","start":"2024-08-02T17:53:28.370639Z","end":"2024-08-02T17:53:29.077133Z","steps":["trace[2114495648] 'agreement among raft nodes before linearized reading'  (duration: 683.586881ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T17:53:29.078843Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T17:53:28.370619Z","time spent":"708.2146ms","remote":"127.0.0.1:39664","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:500 "}
	2024/08/02 17:53:29 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-02T17:53:29.107642Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T17:53:29.107729Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.210:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T17:53:29.107802Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"5a5dd032def1271d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-02T17:53:29.107935Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.107962Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108001Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108122Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108195Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108248Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108271Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6e90f565a3251e9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108279Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108287Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108304Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108341Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.10838Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108419Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"5a5dd032def1271d","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.108478Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"254930c0dd0c8ee9"}
	{"level":"info","ts":"2024-08-02T17:53:29.110873Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-08-02T17:53:29.110968Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.210:2380"}
	{"level":"info","ts":"2024-08-02T17:53:29.110989Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-652395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.210:2380"],"advertise-client-urls":["https://192.168.39.210:2379"]}
	
	
	==> kernel <==
	 18:00:52 up 17 min,  0 users,  load average: 0.15, 0.27, 0.23
	Linux ha-652395 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf950c5d12e435f630c4b4c3abcb6a81923d57812df9231b4238094d723c3c5c] <==
	I0802 18:00:04.324180       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 18:00:14.332902       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 18:00:14.332973       1 main.go:299] handling current node
	I0802 18:00:14.333019       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 18:00:14.333028       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 18:00:14.333285       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 18:00:14.333315       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 18:00:24.331340       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 18:00:24.332165       1 main.go:299] handling current node
	I0802 18:00:24.332296       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 18:00:24.332373       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 18:00:24.332590       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 18:00:24.332623       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 18:00:34.327576       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 18:00:34.327707       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 18:00:34.327895       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 18:00:34.327935       1 main.go:299] handling current node
	I0802 18:00:34.328014       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 18:00:34.328040       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 18:00:44.332630       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 18:00:44.332705       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 18:00:44.332902       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 18:00:44.332925       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 18:00:44.332997       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 18:00:44.333015       1 main.go:299] handling current node
	
	
	==> kindnet [e5737b2ef0345a82c168e43d9eb8978ad14f3b88148b70bea56d97ccbd04b6b1] <==
	I0802 17:53:02.519607       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:02.519713       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:53:02.519949       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:02.519983       1 main.go:299] handling current node
	I0802 17:53:02.520006       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:02.520022       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:02.520117       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:02.520136       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:12.519366       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:12.519470       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:12.519629       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:12.519650       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	I0802 17:53:12.519717       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:12.519735       1 main.go:299] handling current node
	I0802 17:53:12.519751       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:12.519767       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:22.519506       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0802 17:53:22.519624       1 main.go:299] handling current node
	I0802 17:53:22.519661       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0802 17:53:22.519680       1 main.go:322] Node ha-652395-m02 has CIDR [10.244.1.0/24] 
	I0802 17:53:22.519836       1 main.go:295] Handling node with IPs: map[192.168.39.62:{}]
	I0802 17:53:22.519859       1 main.go:322] Node ha-652395-m03 has CIDR [10.244.2.0/24] 
	I0802 17:53:22.519930       1 main.go:295] Handling node with IPs: map[192.168.39.222:{}]
	I0802 17:53:22.519948       1 main.go:322] Node ha-652395-m04 has CIDR [10.244.3.0/24] 
	E0802 17:53:27.550673       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kube-apiserver [4c17f3881c093f3d456f67050b5308a186c347dce9aa46e3d694a3856aa7a70d] <==
	I0802 17:55:04.019665       1 options.go:221] external host was not specified, using 192.168.39.210
	I0802 17:55:04.022291       1 server.go:148] Version: v1.30.3
	I0802 17:55:04.022507       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:04.648065       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0802 17:55:04.648246       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 17:55:04.648847       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0802 17:55:04.648872       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0802 17:55:04.649053       1 instance.go:299] Using reconciler: lease
	W0802 17:55:24.640082       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0802 17:55:24.640082       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0802 17:55:24.650301       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d3d0b5e102e9fbadea3e3c0104ad4c5398e9b7b7c25600a93f4dd759b6b425a1] <==
	I0802 17:55:58.690939       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0802 17:55:58.690995       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0802 17:55:58.691219       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0802 17:55:58.690001       1 establishing_controller.go:76] Starting EstablishingController
	I0802 17:55:58.690081       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0802 17:55:58.788979       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 17:55:58.789071       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 17:55:58.789300       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 17:55:58.790126       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 17:55:58.790238       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 17:55:58.790286       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 17:55:58.790764       1 aggregator.go:165] initial CRD sync complete...
	I0802 17:55:58.790878       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 17:55:58.791273       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 17:55:58.791313       1 cache.go:39] Caches are synced for autoregister controller
	I0802 17:55:58.791364       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0802 17:55:58.792191       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 17:55:58.796239       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0802 17:55:58.802488       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 17:55:58.802540       1 policy_source.go:224] refreshing policies
	I0802 17:55:58.826997       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 17:55:59.701537       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 17:56:32.375340       1 controller.go:615] quota admission added evaluator for: endpoints
	W0802 17:58:30.317769       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.210 192.168.39.220]
	I0802 17:58:30.327173       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9] <==
	I0802 17:55:45.335528       1 serving.go:380] Generated self-signed cert in-memory
	I0802 17:55:45.624045       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0802 17:55:45.624126       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:45.625686       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 17:55:45.626142       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 17:55:45.626848       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0802 17:55:45.627956       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0802 17:55:55.631823       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.210:8443/healthz\": dial tcp 192.168.39.210:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f7d6d025dc3e8c1458f36dd96ba3669dda736544c57e2651dd182db499a629be] <==
	I0802 17:58:14.852891       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.738441ms"
	I0802 17:58:14.887324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.373772ms"
	I0802 17:58:14.887489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.718µs"
	I0802 17:58:14.924159       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.746755ms"
	I0802 17:58:14.924522       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="112.693µs"
	I0802 17:58:16.787300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.029µs"
	I0802 17:58:16.943704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.273µs"
	I0802 17:58:16.960326       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.809µs"
	I0802 17:58:16.965614       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.985µs"
	I0802 17:58:18.549540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.3781ms"
	I0802 17:58:18.549635       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.449µs"
	I0802 17:58:29.482002       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-652395-m04"
	E0802 17:58:29.532493       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-652395-m03", UID:"ab476c30-ca2b-4b56-a8a5-8ab3ddc9e9ce", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-652395-m03", UID:"0000bd1f-9f21-4332-8a13-932f0b4b1c74", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-652395-m03" not found
	E0802 17:58:37.247922       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:37.248360       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:37.248397       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:37.248476       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:37.248501       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:57.249646       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:57.249784       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:57.249811       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:57.249834       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	E0802 17:58:57.249857       1 gc_controller.go:153] "Failed to get node" err="node \"ha-652395-m03\" not found" logger="pod-garbage-collector-controller" node="ha-652395-m03"
	I0802 17:59:05.273040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.160474ms"
	I0802 17:59:05.273167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.589µs"
	
	
	==> kube-proxy [a6e31c0eb2882db4a2d3ec45ae1b120a17e74e2247d94ce14170162ba9be69f4] <==
	E0802 17:55:27.779969       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-652395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0802 17:55:46.211478       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-652395\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0802 17:55:46.211541       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0802 17:55:46.244511       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 17:55:46.244576       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 17:55:46.244594       1 server_linux.go:165] "Using iptables Proxier"
	I0802 17:55:46.246825       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 17:55:46.247116       1 server.go:872] "Version info" version="v1.30.3"
	I0802 17:55:46.247143       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 17:55:46.248541       1 config.go:192] "Starting service config controller"
	I0802 17:55:46.248580       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 17:55:46.248608       1 config.go:101] "Starting endpoint slice config controller"
	I0802 17:55:46.248623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 17:55:46.249335       1 config.go:319] "Starting node config controller"
	I0802 17:55:46.249358       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0802 17:55:49.284056       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0802 17:55:49.284317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:55:49.284620       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-652395&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284696       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-652395&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:55:49.284744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:55:49.284831       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0802 17:55:50.149739       1 shared_informer.go:320] Caches are synced for service config
	I0802 17:55:50.250402       1 shared_informer.go:320] Caches are synced for node config
	I0802 17:55:50.649680       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [dbaf687f1fee9127637aa2d5a95902f6dcd48fce99aea0e15e2ed77bf2f76b2d] <==
	E0802 17:52:15.267607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.338898       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.338977       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:18.338849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:18.339047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.531757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.532974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.532370       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.533096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:26.533228       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:26.533326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:35.747616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:35.747708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:38.819471       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:38.819587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:38.819713       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:38.819769       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:57.251113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:57.251315       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1880": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:52:57.251512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:52:57.251607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-652395&resourceVersion=1855": dial tcp 192.168.39.254:8443: connect: no route to host
	W0802 17:53:00.324137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0802 17:53:00.324177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [211084ef30ab2dd9b950666459be6884bd1eb912bc1b75c181bdb6665fdd4c93] <==
	W0802 17:55:44.282505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.210:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:44.282577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.210:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.149064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.210:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.149126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.210:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.892319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.210:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.892395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.210:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:45.984703       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.210:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:45.984839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.210:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.080723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.210:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.080816       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.210:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.166917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.210:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.166956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.210:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:46.218732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.210:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:46.218817       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.210:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:47.088650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	E0802 17:55:47.088770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.210:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.210:8443: connect: connection refused
	W0802 17:55:58.714873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:55:58.715064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:55:58.715268       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:55:58.715358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0802 17:56:06.265643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0802 17:58:14.745094       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s545w\": pod busybox-fc5497c4f-s545w is already assigned to node \"ha-652395-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-s545w" node="ha-652395-m04"
	E0802 17:58:14.745231       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3e9d4017-07b4-4bdf-a5e4-1aeb208c01ca(default/busybox-fc5497c4f-s545w) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-s545w"
	E0802 17:58:14.745275       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-s545w\": pod busybox-fc5497c4f-s545w is already assigned to node \"ha-652395-m04\"" pod="default/busybox-fc5497c4f-s545w"
	I0802 17:58:14.745305       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-s545w" node="ha-652395-m04"
	
	
	==> kube-scheduler [c587c6ce0994151320d64d8d911e8b76ed3fb29a9bcfc589a5c305eadc9e7eca] <==
	W0802 17:53:21.976372       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:21.976582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 17:53:22.157002       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:22.157099       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:22.162331       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 17:53:22.162416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 17:53:22.682516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 17:53:22.682605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 17:53:22.712918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:22.712959       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:22.908843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 17:53:22.908992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 17:53:22.923265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 17:53:22.923360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 17:53:23.151794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 17:53:23.151868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 17:53:23.461038       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:23.461080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:24.539079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 17:53:24.539160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 17:53:27.622409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 17:53:27.622509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0802 17:53:28.888666       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:28.888720       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 17:53:29.034582       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 02 17:56:24 ha-652395 kubelet[1358]: I0802 17:56:24.847091    1358 scope.go:117] "RemoveContainer" containerID="b764e2109a4e9d31a1465683649d33cac6639e79e06d0624313148e16bb07ca9"
	Aug 02 17:56:45 ha-652395 kubelet[1358]: I0802 17:56:45.845323    1358 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-652395" podUID="1ee810a9-9d93-4cff-a5bb-60bab005eb5c"
	Aug 02 17:56:45 ha-652395 kubelet[1358]: I0802 17:56:45.863900    1358 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-652395"
	Aug 02 17:56:46 ha-652395 kubelet[1358]: I0802 17:56:46.685108    1358 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-652395" podUID="1ee810a9-9d93-4cff-a5bb-60bab005eb5c"
	Aug 02 17:56:54 ha-652395 kubelet[1358]: I0802 17:56:54.863413    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-652395" podStartSLOduration=9.863367292 podStartE2EDuration="9.863367292s" podCreationTimestamp="2024-08-02 17:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 17:56:54.863211047 +0000 UTC m=+770.173561401" watchObservedRunningTime="2024-08-02 17:56:54.863367292 +0000 UTC m=+770.173717648"
	Aug 02 17:57:04 ha-652395 kubelet[1358]: E0802 17:57:04.857837    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:57:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:57:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:58:04 ha-652395 kubelet[1358]: E0802 17:58:04.858144    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:58:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:58:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 17:59:04 ha-652395 kubelet[1358]: E0802 17:59:04.857221    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 17:59:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 17:59:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 17:59:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 17:59:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 18:00:04 ha-652395 kubelet[1358]: E0802 18:00:04.857736    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:00:04 ha-652395 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:00:04 ha-652395 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:00:04 ha-652395 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:00:04 ha-652395 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:00:51.152659   32109 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-652395 -n ha-652395
helpers_test.go:261: (dbg) Run:  kubectl --context ha-652395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.46s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (334.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-250383
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-250383
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-250383: exit status 82 (2m1.710421966s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-250383-m03"  ...
	* Stopping node "multinode-250383-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-250383" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-250383 --wait=true -v=8 --alsologtostderr
E0802 18:17:43.928121   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 18:20:14.261944   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 18:20:46.974325   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-250383 --wait=true -v=8 --alsologtostderr: (3m30.876621317s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-250383
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-250383 -n multinode-250383
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-250383 logs -n 25: (1.452416171s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383:/home/docker/cp-test_multinode-250383-m02_multinode-250383.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383 sudo cat                                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m02_multinode-250383.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03:/home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383-m03 sudo cat                                   | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp testdata/cp-test.txt                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383:/home/docker/cp-test_multinode-250383-m03_multinode-250383.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383 sudo cat                                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02:/home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383-m02 sudo cat                                   | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-250383 node stop m03                                                          | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	| node    | multinode-250383 node start                                                             | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| stop    | -p multinode-250383                                                                     | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| start   | -p multinode-250383                                                                     | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:17 UTC | 02 Aug 24 18:20 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:20 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:17:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:17:26.088966   41488 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:17:26.089225   41488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:17:26.089234   41488 out.go:304] Setting ErrFile to fd 2...
	I0802 18:17:26.089238   41488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:17:26.089402   41488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:17:26.089905   41488 out.go:298] Setting JSON to false
	I0802 18:17:26.090846   41488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3590,"bootTime":1722619056,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:17:26.090905   41488 start.go:139] virtualization: kvm guest
	I0802 18:17:26.093329   41488 out.go:177] * [multinode-250383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:17:26.094555   41488 notify.go:220] Checking for updates...
	I0802 18:17:26.094559   41488 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:17:26.095956   41488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:17:26.097217   41488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:17:26.098356   41488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:17:26.099530   41488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:17:26.100670   41488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:17:26.102131   41488 config.go:182] Loaded profile config "multinode-250383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:17:26.102211   41488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:17:26.102602   41488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:17:26.102655   41488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:17:26.118491   41488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0802 18:17:26.118891   41488 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:17:26.119451   41488 main.go:141] libmachine: Using API Version  1
	I0802 18:17:26.119474   41488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:17:26.119818   41488 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:17:26.120022   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.154923   41488 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:17:26.156120   41488 start.go:297] selected driver: kvm2
	I0802 18:17:26.156136   41488 start.go:901] validating driver "kvm2" against &{Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:17:26.156256   41488 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:17:26.156595   41488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:17:26.156660   41488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:17:26.170789   41488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:17:26.171454   41488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:17:26.171514   41488 cni.go:84] Creating CNI manager for ""
	I0802 18:17:26.171525   41488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0802 18:17:26.171592   41488 start.go:340] cluster config:
	{Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:17:26.171716   41488 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:17:26.174175   41488 out.go:177] * Starting "multinode-250383" primary control-plane node in "multinode-250383" cluster
	I0802 18:17:26.175398   41488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:17:26.175429   41488 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:17:26.175436   41488 cache.go:56] Caching tarball of preloaded images
	I0802 18:17:26.175509   41488 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:17:26.175519   41488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:17:26.175625   41488 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/config.json ...
	I0802 18:17:26.175802   41488 start.go:360] acquireMachinesLock for multinode-250383: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:17:26.175840   41488 start.go:364] duration metric: took 20.542µs to acquireMachinesLock for "multinode-250383"
	I0802 18:17:26.175853   41488 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:17:26.175859   41488 fix.go:54] fixHost starting: 
	I0802 18:17:26.176145   41488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:17:26.176175   41488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:17:26.190418   41488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0802 18:17:26.190806   41488 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:17:26.191272   41488 main.go:141] libmachine: Using API Version  1
	I0802 18:17:26.191295   41488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:17:26.191631   41488 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:17:26.191865   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.192025   41488 main.go:141] libmachine: (multinode-250383) Calling .GetState
	I0802 18:17:26.193653   41488 fix.go:112] recreateIfNeeded on multinode-250383: state=Running err=<nil>
	W0802 18:17:26.193674   41488 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:17:26.195547   41488 out.go:177] * Updating the running kvm2 "multinode-250383" VM ...
	I0802 18:17:26.196690   41488 machine.go:94] provisionDockerMachine start ...
	I0802 18:17:26.196707   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.196905   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.199456   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.199867   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.199895   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.200099   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.200274   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.200443   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.200571   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.200737   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.200975   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.200990   41488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:17:26.304727   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-250383
	
	I0802 18:17:26.304785   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.305080   41488 buildroot.go:166] provisioning hostname "multinode-250383"
	I0802 18:17:26.305108   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.305320   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.308069   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.308417   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.308447   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.308595   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.308746   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.308885   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.309034   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.309220   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.309386   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.309401   41488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-250383 && echo "multinode-250383" | sudo tee /etc/hostname
	I0802 18:17:26.422357   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-250383
	
	I0802 18:17:26.422395   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.425066   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.425438   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.425469   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.425593   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.425781   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.425951   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.426094   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.426263   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.426476   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.426493   41488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-250383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-250383/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-250383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:17:26.527895   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:17:26.527923   41488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:17:26.527967   41488 buildroot.go:174] setting up certificates
	I0802 18:17:26.527983   41488 provision.go:84] configureAuth start
	I0802 18:17:26.528000   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.528325   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:17:26.530779   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.531163   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.531193   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.531305   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.533165   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.533517   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.533545   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.533651   41488 provision.go:143] copyHostCerts
	I0802 18:17:26.533680   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:17:26.533719   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:17:26.533729   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:17:26.533806   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:17:26.533917   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:17:26.533944   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:17:26.533954   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:17:26.533994   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:17:26.534066   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:17:26.534092   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:17:26.534101   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:17:26.534135   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:17:26.534213   41488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.multinode-250383 san=[127.0.0.1 192.168.39.67 localhost minikube multinode-250383]
	I0802 18:17:26.685070   41488 provision.go:177] copyRemoteCerts
	I0802 18:17:26.685132   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:17:26.685160   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.687446   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.687788   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.687813   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.687953   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.688138   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.688323   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.688473   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:17:26.769150   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 18:17:26.769216   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:17:26.793229   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 18:17:26.793298   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0802 18:17:26.815945   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 18:17:26.816011   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:17:26.839453   41488 provision.go:87] duration metric: took 311.454127ms to configureAuth
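configureAuth above regenerates the TLS material and pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube streams these through its own SSH/scp helper; purely as an illustration (not minikube's actual code path), the same transfer could be done by hand with the key and address from this log, assuming passwordless sudo for the docker user as in the minikube guest image and using ~/.minikube as a stand-in for the profile directory:

    # host-side paths shown relative to the profile's .minikube directory
    for f in certs/ca.pem machines/server.pem machines/server-key.pem; do
      ssh -i ~/.minikube/machines/multinode-250383/id_rsa docker@192.168.39.67 \
        "sudo tee /etc/docker/$(basename "$f") >/dev/null" < ~/.minikube/"$f"
    done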
	I0802 18:17:26.839488   41488 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:17:26.839726   41488 config.go:182] Loaded profile config "multinode-250383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:17:26.839790   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.842349   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.842732   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.842759   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.842963   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.843180   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.843334   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.843465   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.843576   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.843794   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.843820   41488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:18:57.547830   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:18:57.547859   41488 machine.go:97] duration metric: took 1m31.351156538s to provisionDockerMachine
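The `%!s(MISSING)` placeholders in the command above are not part of what ran on the guest; they are minikube's logger re-formatting an already-formatted string and finding no argument for the `%s` verb. Judging by the echoed output, the step amounts to:

    sudo mkdir -p /etc/sysconfig
    echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio   # restarting crio is presumably what makes this SSH step take ~1m31s here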
	I0802 18:18:57.547873   41488 start.go:293] postStartSetup for "multinode-250383" (driver="kvm2")
	I0802 18:18:57.547887   41488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:18:57.547910   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.548286   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:18:57.548333   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.551416   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.551832   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.551866   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.551978   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.552202   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.552390   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.552558   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.634466   41488 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:18:57.638732   41488 command_runner.go:130] > NAME=Buildroot
	I0802 18:18:57.638751   41488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0802 18:18:57.638757   41488 command_runner.go:130] > ID=buildroot
	I0802 18:18:57.638764   41488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0802 18:18:57.638771   41488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0802 18:18:57.638802   41488 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:18:57.638819   41488 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:18:57.638901   41488 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:18:57.638980   41488 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:18:57.638990   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 18:18:57.639078   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:18:57.649063   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:18:57.671904   41488 start.go:296] duration metric: took 124.016972ms for postStartSetup
	I0802 18:18:57.671954   41488 fix.go:56] duration metric: took 1m31.49609375s for fixHost
	I0802 18:18:57.671982   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.674613   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.675029   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.675046   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.675228   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.675445   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.675641   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.675867   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.676076   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:18:57.676233   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:18:57.676243   41488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:18:57.775629   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722622737.740075434
	
	I0802 18:18:57.775663   41488 fix.go:216] guest clock: 1722622737.740075434
	I0802 18:18:57.775670   41488 fix.go:229] Guest: 2024-08-02 18:18:57.740075434 +0000 UTC Remote: 2024-08-02 18:18:57.671960943 +0000 UTC m=+91.617649118 (delta=68.114491ms)
	I0802 18:18:57.775693   41488 fix.go:200] guest clock delta is within tolerance: 68.114491ms
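The mangled `date +%!s(MISSING).%!N(MISSING)` above is the same logger artifact; what runs on the guest is simply:

    date +%s.%N   # guest time as seconds.nanoseconds, compared with the host clock to get the ~68ms delta reported above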
	I0802 18:18:57.775699   41488 start.go:83] releasing machines lock for "multinode-250383", held for 1m31.599850585s
	I0802 18:18:57.775716   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.776190   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:18:57.778648   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.779074   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.779134   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.779302   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.779853   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.780027   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.780136   41488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:18:57.780176   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.780195   41488 ssh_runner.go:195] Run: cat /version.json
	I0802 18:18:57.780213   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.782716   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.782960   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783017   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.783041   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783217   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.783391   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.783444   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.783484   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783512   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.783683   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.783706   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.783847   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.783981   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.784120   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.863814   41488 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0802 18:18:57.863998   41488 ssh_runner.go:195] Run: systemctl --version
	I0802 18:18:57.899634   41488 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0802 18:18:57.900337   41488 command_runner.go:130] > systemd 252 (252)
	I0802 18:18:57.900372   41488 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0802 18:18:57.900442   41488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:18:58.068546   41488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0802 18:18:58.077817   41488 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0802 18:18:58.078094   41488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:18:58.078168   41488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:18:58.087467   41488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
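Similarly, `-printf "%!p(MISSING), "` stands for find's `%p` (the file path). With shell quoting restored, the step that parks any bridge/podman CNI configs out of the way looks roughly like this (it relies on GNU find substituting `{}` anywhere inside the `sh -c` argument):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;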
	I0802 18:18:58.087492   41488 start.go:495] detecting cgroup driver to use...
	I0802 18:18:58.087543   41488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:18:58.103626   41488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:18:58.116340   41488 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:18:58.116400   41488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:18:58.129519   41488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:18:58.142471   41488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:18:58.287053   41488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:18:58.428960   41488 docker.go:233] disabling docker service ...
	I0802 18:18:58.429060   41488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:18:58.446663   41488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:18:58.460091   41488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:18:58.600929   41488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:18:58.761239   41488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
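Taken together, the runtime cleanup in this block just makes sure nothing but CRI-O answers on the CRI side: containerd is stopped, and both cri-docker and docker have their sockets and services stopped, disabled and masked. Condensed, with the same flags the log uses:

    sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service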
	I0802 18:18:58.810722   41488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:18:58.845464   41488 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0802 18:18:58.845504   41488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:18:58.845546   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.860397   41488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:18:58.860460   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.878457   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.888663   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.898861   41488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:18:58.913226   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.924536   41488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.942722   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.953656   41488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:18:58.966199   41488 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0802 18:18:58.966280   41488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:18:58.978009   41488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:18:59.148179   41488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:19:09.361123   41488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.212901234s)
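Between the crictl endpoint write and this restart, the whole CRI-O reconfiguration is a handful of in-place edits to /etc/crio/crio.conf.d/02-crio.conf. Condensed from the commands above, with the `printf %!s(MISSING)` mangling restored (the unprivileged-port sysctl entries are edited the same way):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio   # the restart accounts for the 10.2s reported above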
	I0802 18:19:09.361162   41488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:19:09.361220   41488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:19:09.365885   41488 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0802 18:19:09.365905   41488 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0802 18:19:09.365912   41488 command_runner.go:130] > Device: 0,22	Inode: 1405        Links: 1
	I0802 18:19:09.365919   41488 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0802 18:19:09.365924   41488 command_runner.go:130] > Access: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365930   41488 command_runner.go:130] > Modify: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365937   41488 command_runner.go:130] > Change: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365947   41488 command_runner.go:130] >  Birth: -
	I0802 18:19:09.366078   41488 start.go:563] Will wait 60s for crictl version
	I0802 18:19:09.366121   41488 ssh_runner.go:195] Run: which crictl
	I0802 18:19:09.369606   41488 command_runner.go:130] > /usr/bin/crictl
	I0802 18:19:09.369658   41488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:19:09.403989   41488 command_runner.go:130] > Version:  0.1.0
	I0802 18:19:09.404011   41488 command_runner.go:130] > RuntimeName:  cri-o
	I0802 18:19:09.404016   41488 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0802 18:19:09.404020   41488 command_runner.go:130] > RuntimeApiVersion:  v1
	I0802 18:19:09.405022   41488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:19:09.405114   41488 ssh_runner.go:195] Run: crio --version
	I0802 18:19:09.430997   41488 command_runner.go:130] > crio version 1.29.1
	I0802 18:19:09.431019   41488 command_runner.go:130] > Version:        1.29.1
	I0802 18:19:09.431025   41488 command_runner.go:130] > GitCommit:      unknown
	I0802 18:19:09.431029   41488 command_runner.go:130] > GitCommitDate:  unknown
	I0802 18:19:09.431033   41488 command_runner.go:130] > GitTreeState:   clean
	I0802 18:19:09.431038   41488 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0802 18:19:09.431042   41488 command_runner.go:130] > GoVersion:      go1.21.6
	I0802 18:19:09.431048   41488 command_runner.go:130] > Compiler:       gc
	I0802 18:19:09.431054   41488 command_runner.go:130] > Platform:       linux/amd64
	I0802 18:19:09.431060   41488 command_runner.go:130] > Linkmode:       dynamic
	I0802 18:19:09.431066   41488 command_runner.go:130] > BuildTags:      
	I0802 18:19:09.431073   41488 command_runner.go:130] >   containers_image_ostree_stub
	I0802 18:19:09.431083   41488 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0802 18:19:09.431088   41488 command_runner.go:130] >   btrfs_noversion
	I0802 18:19:09.431093   41488 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0802 18:19:09.431107   41488 command_runner.go:130] >   libdm_no_deferred_remove
	I0802 18:19:09.431117   41488 command_runner.go:130] >   seccomp
	I0802 18:19:09.431149   41488 command_runner.go:130] > LDFlags:          unknown
	I0802 18:19:09.431157   41488 command_runner.go:130] > SeccompEnabled:   true
	I0802 18:19:09.431161   41488 command_runner.go:130] > AppArmorEnabled:  false
	I0802 18:19:09.432348   41488 ssh_runner.go:195] Run: crio --version
	I0802 18:19:09.459853   41488 command_runner.go:130] > crio version 1.29.1
	I0802 18:19:09.459881   41488 command_runner.go:130] > Version:        1.29.1
	I0802 18:19:09.459889   41488 command_runner.go:130] > GitCommit:      unknown
	I0802 18:19:09.459895   41488 command_runner.go:130] > GitCommitDate:  unknown
	I0802 18:19:09.459900   41488 command_runner.go:130] > GitTreeState:   clean
	I0802 18:19:09.459912   41488 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0802 18:19:09.459919   41488 command_runner.go:130] > GoVersion:      go1.21.6
	I0802 18:19:09.459925   41488 command_runner.go:130] > Compiler:       gc
	I0802 18:19:09.459929   41488 command_runner.go:130] > Platform:       linux/amd64
	I0802 18:19:09.459934   41488 command_runner.go:130] > Linkmode:       dynamic
	I0802 18:19:09.459944   41488 command_runner.go:130] > BuildTags:      
	I0802 18:19:09.459951   41488 command_runner.go:130] >   containers_image_ostree_stub
	I0802 18:19:09.459955   41488 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0802 18:19:09.459960   41488 command_runner.go:130] >   btrfs_noversion
	I0802 18:19:09.459964   41488 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0802 18:19:09.459972   41488 command_runner.go:130] >   libdm_no_deferred_remove
	I0802 18:19:09.459975   41488 command_runner.go:130] >   seccomp
	I0802 18:19:09.459980   41488 command_runner.go:130] > LDFlags:          unknown
	I0802 18:19:09.459983   41488 command_runner.go:130] > SeccompEnabled:   true
	I0802 18:19:09.459987   41488 command_runner.go:130] > AppArmorEnabled:  false
	I0802 18:19:09.464064   41488 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:19:09.465711   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:19:09.468327   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:19:09.468796   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:19:09.468822   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:19:09.469006   41488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:19:09.472907   41488 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0802 18:19:09.473117   41488 kubeadm.go:883] updating cluster {Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:19:09.473310   41488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:19:09.473371   41488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:19:09.518537   41488 command_runner.go:130] > {
	I0802 18:19:09.518563   41488 command_runner.go:130] >   "images": [
	I0802 18:19:09.518570   41488 command_runner.go:130] >     {
	I0802 18:19:09.518581   41488 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0802 18:19:09.518586   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518591   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0802 18:19:09.518595   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518601   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518627   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0802 18:19:09.518640   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0802 18:19:09.518646   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518654   41488 command_runner.go:130] >       "size": "87165492",
	I0802 18:19:09.518660   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518667   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518676   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518686   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518692   41488 command_runner.go:130] >     },
	I0802 18:19:09.518700   41488 command_runner.go:130] >     {
	I0802 18:19:09.518710   41488 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0802 18:19:09.518717   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518726   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0802 18:19:09.518735   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518739   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518748   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0802 18:19:09.518757   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0802 18:19:09.518761   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518767   41488 command_runner.go:130] >       "size": "87174707",
	I0802 18:19:09.518771   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518789   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518795   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518799   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518803   41488 command_runner.go:130] >     },
	I0802 18:19:09.518809   41488 command_runner.go:130] >     {
	I0802 18:19:09.518819   41488 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0802 18:19:09.518830   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518837   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0802 18:19:09.518842   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518848   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518859   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0802 18:19:09.518870   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0802 18:19:09.518875   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518882   41488 command_runner.go:130] >       "size": "1363676",
	I0802 18:19:09.518887   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518894   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518900   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518906   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518911   41488 command_runner.go:130] >     },
	I0802 18:19:09.518918   41488 command_runner.go:130] >     {
	I0802 18:19:09.518931   41488 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0802 18:19:09.518940   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518951   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0802 18:19:09.518957   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518961   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518970   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0802 18:19:09.518987   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0802 18:19:09.518994   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518998   41488 command_runner.go:130] >       "size": "31470524",
	I0802 18:19:09.519002   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519006   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519012   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519016   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519022   41488 command_runner.go:130] >     },
	I0802 18:19:09.519025   41488 command_runner.go:130] >     {
	I0802 18:19:09.519033   41488 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0802 18:19:09.519041   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519048   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0802 18:19:09.519054   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519058   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519067   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0802 18:19:09.519076   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0802 18:19:09.519081   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519085   41488 command_runner.go:130] >       "size": "61245718",
	I0802 18:19:09.519094   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519113   41488 command_runner.go:130] >       "username": "nonroot",
	I0802 18:19:09.519123   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519129   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519134   41488 command_runner.go:130] >     },
	I0802 18:19:09.519139   41488 command_runner.go:130] >     {
	I0802 18:19:09.519145   41488 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0802 18:19:09.519151   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519156   41488 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0802 18:19:09.519162   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519165   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519174   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0802 18:19:09.519181   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0802 18:19:09.519186   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519191   41488 command_runner.go:130] >       "size": "150779692",
	I0802 18:19:09.519196   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519200   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519205   41488 command_runner.go:130] >       },
	I0802 18:19:09.519209   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519214   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519218   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519223   41488 command_runner.go:130] >     },
	I0802 18:19:09.519226   41488 command_runner.go:130] >     {
	I0802 18:19:09.519234   41488 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0802 18:19:09.519238   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519245   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0802 18:19:09.519251   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519254   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519270   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0802 18:19:09.519280   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0802 18:19:09.519286   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519290   41488 command_runner.go:130] >       "size": "117609954",
	I0802 18:19:09.519295   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519299   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519305   41488 command_runner.go:130] >       },
	I0802 18:19:09.519308   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519315   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519319   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519324   41488 command_runner.go:130] >     },
	I0802 18:19:09.519328   41488 command_runner.go:130] >     {
	I0802 18:19:09.519335   41488 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0802 18:19:09.519339   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519347   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0802 18:19:09.519350   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519356   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519377   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0802 18:19:09.519387   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0802 18:19:09.519393   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519397   41488 command_runner.go:130] >       "size": "112198984",
	I0802 18:19:09.519402   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519406   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519412   41488 command_runner.go:130] >       },
	I0802 18:19:09.519416   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519420   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519423   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519426   41488 command_runner.go:130] >     },
	I0802 18:19:09.519429   41488 command_runner.go:130] >     {
	I0802 18:19:09.519434   41488 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0802 18:19:09.519438   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519442   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0802 18:19:09.519446   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519449   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519456   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0802 18:19:09.519462   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0802 18:19:09.519470   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519479   41488 command_runner.go:130] >       "size": "85953945",
	I0802 18:19:09.519482   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519485   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519489   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519492   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519495   41488 command_runner.go:130] >     },
	I0802 18:19:09.519498   41488 command_runner.go:130] >     {
	I0802 18:19:09.519504   41488 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0802 18:19:09.519507   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519512   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0802 18:19:09.519518   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519522   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519531   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0802 18:19:09.519540   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0802 18:19:09.519545   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519549   41488 command_runner.go:130] >       "size": "63051080",
	I0802 18:19:09.519560   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519566   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519569   41488 command_runner.go:130] >       },
	I0802 18:19:09.519576   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519580   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519584   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519587   41488 command_runner.go:130] >     },
	I0802 18:19:09.519590   41488 command_runner.go:130] >     {
	I0802 18:19:09.519596   41488 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0802 18:19:09.519602   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519606   41488 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0802 18:19:09.519616   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519622   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519628   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0802 18:19:09.519637   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0802 18:19:09.519642   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519646   41488 command_runner.go:130] >       "size": "750414",
	I0802 18:19:09.519651   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519655   41488 command_runner.go:130] >         "value": "65535"
	I0802 18:19:09.519665   41488 command_runner.go:130] >       },
	I0802 18:19:09.519670   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519676   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519680   41488 command_runner.go:130] >       "pinned": true
	I0802 18:19:09.519685   41488 command_runner.go:130] >     }
	I0802 18:19:09.519688   41488 command_runner.go:130] >   ]
	I0802 18:19:09.519693   41488 command_runner.go:130] > }
	I0802 18:19:09.519886   41488 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:19:09.519901   41488 crio.go:433] Images already preloaded, skipping extraction
	I0802 18:19:09.519945   41488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:19:09.551805   41488 command_runner.go:130] > {
	I0802 18:19:09.551825   41488 command_runner.go:130] >   "images": [
	I0802 18:19:09.551829   41488 command_runner.go:130] >     {
	I0802 18:19:09.551838   41488 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0802 18:19:09.551842   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.551848   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0802 18:19:09.551853   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551857   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.551864   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0802 18:19:09.551871   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0802 18:19:09.551875   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551879   41488 command_runner.go:130] >       "size": "87165492",
	I0802 18:19:09.551883   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.551886   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.551893   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.551901   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.551906   41488 command_runner.go:130] >     },
	I0802 18:19:09.551913   41488 command_runner.go:130] >     {
	I0802 18:19:09.551921   41488 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0802 18:19:09.551927   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.551939   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0802 18:19:09.551944   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551951   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.551962   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0802 18:19:09.551973   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0802 18:19:09.551985   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551991   41488 command_runner.go:130] >       "size": "87174707",
	I0802 18:19:09.551994   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552001   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552005   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552009   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552013   41488 command_runner.go:130] >     },
	I0802 18:19:09.552016   41488 command_runner.go:130] >     {
	I0802 18:19:09.552022   41488 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0802 18:19:09.552027   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552031   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0802 18:19:09.552037   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552040   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552047   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0802 18:19:09.552057   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0802 18:19:09.552061   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552065   41488 command_runner.go:130] >       "size": "1363676",
	I0802 18:19:09.552069   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552072   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552076   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552080   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552085   41488 command_runner.go:130] >     },
	I0802 18:19:09.552088   41488 command_runner.go:130] >     {
	I0802 18:19:09.552094   41488 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0802 18:19:09.552100   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552106   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0802 18:19:09.552109   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552112   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552120   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0802 18:19:09.552137   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0802 18:19:09.552143   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552147   41488 command_runner.go:130] >       "size": "31470524",
	I0802 18:19:09.552151   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552155   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552158   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552162   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552177   41488 command_runner.go:130] >     },
	I0802 18:19:09.552181   41488 command_runner.go:130] >     {
	I0802 18:19:09.552186   41488 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0802 18:19:09.552190   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552194   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0802 18:19:09.552198   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552202   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552209   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0802 18:19:09.552218   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0802 18:19:09.552221   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552225   41488 command_runner.go:130] >       "size": "61245718",
	I0802 18:19:09.552229   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552232   41488 command_runner.go:130] >       "username": "nonroot",
	I0802 18:19:09.552236   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552244   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552249   41488 command_runner.go:130] >     },
	I0802 18:19:09.552252   41488 command_runner.go:130] >     {
	I0802 18:19:09.552258   41488 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0802 18:19:09.552264   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552268   41488 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0802 18:19:09.552273   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552277   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552285   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0802 18:19:09.552292   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0802 18:19:09.552298   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552301   41488 command_runner.go:130] >       "size": "150779692",
	I0802 18:19:09.552305   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552311   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552314   41488 command_runner.go:130] >       },
	I0802 18:19:09.552319   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552322   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552327   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552330   41488 command_runner.go:130] >     },
	I0802 18:19:09.552334   41488 command_runner.go:130] >     {
	I0802 18:19:09.552341   41488 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0802 18:19:09.552345   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552359   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0802 18:19:09.552366   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552370   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552379   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0802 18:19:09.552389   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0802 18:19:09.552393   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552397   41488 command_runner.go:130] >       "size": "117609954",
	I0802 18:19:09.552403   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552407   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552411   41488 command_runner.go:130] >       },
	I0802 18:19:09.552415   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552419   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552423   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552426   41488 command_runner.go:130] >     },
	I0802 18:19:09.552429   41488 command_runner.go:130] >     {
	I0802 18:19:09.552436   41488 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0802 18:19:09.552442   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552447   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0802 18:19:09.552452   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552456   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552476   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0802 18:19:09.552486   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0802 18:19:09.552490   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552494   41488 command_runner.go:130] >       "size": "112198984",
	I0802 18:19:09.552497   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552501   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552504   41488 command_runner.go:130] >       },
	I0802 18:19:09.552508   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552512   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552521   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552527   41488 command_runner.go:130] >     },
	I0802 18:19:09.552530   41488 command_runner.go:130] >     {
	I0802 18:19:09.552536   41488 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0802 18:19:09.552542   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552547   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0802 18:19:09.552552   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552560   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552569   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0802 18:19:09.552579   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0802 18:19:09.552583   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552587   41488 command_runner.go:130] >       "size": "85953945",
	I0802 18:19:09.552593   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552597   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552610   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552616   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552620   41488 command_runner.go:130] >     },
	I0802 18:19:09.552623   41488 command_runner.go:130] >     {
	I0802 18:19:09.552629   41488 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0802 18:19:09.552633   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552638   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0802 18:19:09.552643   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552647   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552654   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0802 18:19:09.552663   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0802 18:19:09.552669   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552673   41488 command_runner.go:130] >       "size": "63051080",
	I0802 18:19:09.552677   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552681   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552686   41488 command_runner.go:130] >       },
	I0802 18:19:09.552690   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552694   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552700   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552703   41488 command_runner.go:130] >     },
	I0802 18:19:09.552707   41488 command_runner.go:130] >     {
	I0802 18:19:09.552712   41488 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0802 18:19:09.552718   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552722   41488 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0802 18:19:09.552725   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552730   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552738   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0802 18:19:09.552745   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0802 18:19:09.552749   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552760   41488 command_runner.go:130] >       "size": "750414",
	I0802 18:19:09.552766   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552770   41488 command_runner.go:130] >         "value": "65535"
	I0802 18:19:09.552774   41488 command_runner.go:130] >       },
	I0802 18:19:09.552777   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552781   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552785   41488 command_runner.go:130] >       "pinned": true
	I0802 18:19:09.552788   41488 command_runner.go:130] >     }
	I0802 18:19:09.552791   41488 command_runner.go:130] >   ]
	I0802 18:19:09.552795   41488 command_runner.go:130] > }
	I0802 18:19:09.552912   41488 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:19:09.552923   41488 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:19:09.552930   41488 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.30.3 crio true true} ...
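	For reference, the `crictl images --output json` payload logged above is a single object with an `images` array whose entries carry `id`, `repoTags`, `repoDigests`, `size` (as a string), `uid`, `username`, and `pinned`; the preload check at crio.go:514 walks this list. A minimal sketch, not minikube code, assuming the JSON is piped in on stdin:

	```go
	// Sketch only: decode the image list shape shown in the log above and
	// print each tag with its reported size, flagging pinned images.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // reported as a string, e.g. "85953945"
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Printf("%s\t%s bytes\tpinned=%v\n", tag, img.Size, img.Pinned)
			}
		}
	}
	```

	Usage (on the node): `sudo crictl images --output json | go run main.go`.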
	I0802 18:19:09.553048   41488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-250383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
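	The kubelet drop-in logged at kubeadm.go:946 is rendered from the cluster config printed just above it (node name, node IP, Kubernetes version). A minimal sketch of that rendering with Go's text/template; the template text and parameter struct here are assumptions for illustration, not minikube's actual kubeadm.go code:

	```go
	// Sketch only: render a kubelet systemd override like the one logged above
	// from a few node parameters using text/template.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.30.3", "multinode-250383", "192.168.39.67"} // values taken from the log above

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
	```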
	I0802 18:19:09.553111   41488 ssh_runner.go:195] Run: crio config
	I0802 18:19:09.593607   41488 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0802 18:19:09.593641   41488 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0802 18:19:09.593651   41488 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0802 18:19:09.593676   41488 command_runner.go:130] > #
	I0802 18:19:09.593689   41488 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0802 18:19:09.593700   41488 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0802 18:19:09.593709   41488 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0802 18:19:09.593738   41488 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0802 18:19:09.593749   41488 command_runner.go:130] > # reload'.
	I0802 18:19:09.593759   41488 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0802 18:19:09.593769   41488 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0802 18:19:09.593780   41488 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0802 18:19:09.593792   41488 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0802 18:19:09.593808   41488 command_runner.go:130] > [crio]
	I0802 18:19:09.593821   41488 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0802 18:19:09.593830   41488 command_runner.go:130] > # containers images, in this directory.
	I0802 18:19:09.593842   41488 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0802 18:19:09.593856   41488 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0802 18:19:09.593867   41488 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0802 18:19:09.593881   41488 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0802 18:19:09.594022   41488 command_runner.go:130] > # imagestore = ""
	I0802 18:19:09.594047   41488 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0802 18:19:09.594059   41488 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0802 18:19:09.594127   41488 command_runner.go:130] > storage_driver = "overlay"
	I0802 18:19:09.594144   41488 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0802 18:19:09.594154   41488 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0802 18:19:09.594163   41488 command_runner.go:130] > storage_option = [
	I0802 18:19:09.594247   41488 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0802 18:19:09.594355   41488 command_runner.go:130] > ]
	I0802 18:19:09.594371   41488 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0802 18:19:09.594383   41488 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0802 18:19:09.594453   41488 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0802 18:19:09.594468   41488 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0802 18:19:09.594483   41488 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0802 18:19:09.594491   41488 command_runner.go:130] > # always happen on a node reboot
	I0802 18:19:09.594678   41488 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0802 18:19:09.594719   41488 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0802 18:19:09.594730   41488 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0802 18:19:09.594738   41488 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0802 18:19:09.594814   41488 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0802 18:19:09.594835   41488 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0802 18:19:09.594848   41488 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0802 18:19:09.595125   41488 command_runner.go:130] > # internal_wipe = true
	I0802 18:19:09.595146   41488 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0802 18:19:09.595155   41488 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0802 18:19:09.595410   41488 command_runner.go:130] > # internal_repair = false
	I0802 18:19:09.595429   41488 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0802 18:19:09.595438   41488 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0802 18:19:09.595448   41488 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0802 18:19:09.595588   41488 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0802 18:19:09.595612   41488 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0802 18:19:09.595619   41488 command_runner.go:130] > [crio.api]
	I0802 18:19:09.595627   41488 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0802 18:19:09.595807   41488 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0802 18:19:09.595821   41488 command_runner.go:130] > # IP address on which the stream server will listen.
	I0802 18:19:09.596178   41488 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0802 18:19:09.596193   41488 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0802 18:19:09.596199   41488 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0802 18:19:09.596383   41488 command_runner.go:130] > # stream_port = "0"
	I0802 18:19:09.596393   41488 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0802 18:19:09.596632   41488 command_runner.go:130] > # stream_enable_tls = false
	I0802 18:19:09.596649   41488 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0802 18:19:09.596825   41488 command_runner.go:130] > # stream_idle_timeout = ""
	I0802 18:19:09.596841   41488 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0802 18:19:09.596850   41488 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0802 18:19:09.596871   41488 command_runner.go:130] > # minutes.
	I0802 18:19:09.597048   41488 command_runner.go:130] > # stream_tls_cert = ""
	I0802 18:19:09.597067   41488 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0802 18:19:09.597076   41488 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0802 18:19:09.597216   41488 command_runner.go:130] > # stream_tls_key = ""
	I0802 18:19:09.597230   41488 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0802 18:19:09.597240   41488 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0802 18:19:09.597273   41488 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0802 18:19:09.597423   41488 command_runner.go:130] > # stream_tls_ca = ""
	I0802 18:19:09.597440   41488 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0802 18:19:09.597502   41488 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0802 18:19:09.597526   41488 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0802 18:19:09.597600   41488 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0802 18:19:09.597614   41488 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0802 18:19:09.597626   41488 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0802 18:19:09.597632   41488 command_runner.go:130] > [crio.runtime]
	I0802 18:19:09.597644   41488 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0802 18:19:09.597657   41488 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0802 18:19:09.597663   41488 command_runner.go:130] > # "nofile=1024:2048"
	I0802 18:19:09.597675   41488 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0802 18:19:09.597728   41488 command_runner.go:130] > # default_ulimits = [
	I0802 18:19:09.597849   41488 command_runner.go:130] > # ]
	I0802 18:19:09.597863   41488 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0802 18:19:09.598066   41488 command_runner.go:130] > # no_pivot = false
	I0802 18:19:09.598084   41488 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0802 18:19:09.598095   41488 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0802 18:19:09.598258   41488 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0802 18:19:09.598278   41488 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0802 18:19:09.598291   41488 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0802 18:19:09.598305   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0802 18:19:09.598393   41488 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0802 18:19:09.598406   41488 command_runner.go:130] > # Cgroup setting for conmon
	I0802 18:19:09.598417   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0802 18:19:09.598515   41488 command_runner.go:130] > conmon_cgroup = "pod"
	I0802 18:19:09.598533   41488 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0802 18:19:09.598542   41488 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0802 18:19:09.598554   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0802 18:19:09.598563   41488 command_runner.go:130] > conmon_env = [
	I0802 18:19:09.598613   41488 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0802 18:19:09.598655   41488 command_runner.go:130] > ]
	I0802 18:19:09.598668   41488 command_runner.go:130] > # Additional environment variables to set for all the
	I0802 18:19:09.598676   41488 command_runner.go:130] > # containers. These are overridden if set in the
	I0802 18:19:09.598688   41488 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0802 18:19:09.598782   41488 command_runner.go:130] > # default_env = [
	I0802 18:19:09.598966   41488 command_runner.go:130] > # ]
	I0802 18:19:09.598976   41488 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0802 18:19:09.598993   41488 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0802 18:19:09.599212   41488 command_runner.go:130] > # selinux = false
	I0802 18:19:09.599228   41488 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0802 18:19:09.599238   41488 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0802 18:19:09.599248   41488 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0802 18:19:09.599399   41488 command_runner.go:130] > # seccomp_profile = ""
	I0802 18:19:09.599414   41488 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0802 18:19:09.599422   41488 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0802 18:19:09.599430   41488 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0802 18:19:09.599437   41488 command_runner.go:130] > # which might increase security.
	I0802 18:19:09.599444   41488 command_runner.go:130] > # This option is currently deprecated,
	I0802 18:19:09.599453   41488 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0802 18:19:09.599522   41488 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0802 18:19:09.599536   41488 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0802 18:19:09.599547   41488 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0802 18:19:09.599561   41488 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0802 18:19:09.599574   41488 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0802 18:19:09.599582   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.599771   41488 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0802 18:19:09.599782   41488 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0802 18:19:09.599789   41488 command_runner.go:130] > # the cgroup blockio controller.
	I0802 18:19:09.600019   41488 command_runner.go:130] > # blockio_config_file = ""
	I0802 18:19:09.600033   41488 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0802 18:19:09.600038   41488 command_runner.go:130] > # blockio parameters.
	I0802 18:19:09.600246   41488 command_runner.go:130] > # blockio_reload = false
	I0802 18:19:09.600257   41488 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0802 18:19:09.600272   41488 command_runner.go:130] > # irqbalance daemon.
	I0802 18:19:09.600490   41488 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0802 18:19:09.600499   41488 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0802 18:19:09.600506   41488 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0802 18:19:09.600515   41488 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0802 18:19:09.600746   41488 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0802 18:19:09.600753   41488 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0802 18:19:09.600758   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.600963   41488 command_runner.go:130] > # rdt_config_file = ""
	I0802 18:19:09.600976   41488 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0802 18:19:09.601099   41488 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0802 18:19:09.601150   41488 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0802 18:19:09.601278   41488 command_runner.go:130] > # separate_pull_cgroup = ""
	I0802 18:19:09.601288   41488 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0802 18:19:09.601294   41488 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0802 18:19:09.601297   41488 command_runner.go:130] > # will be added.
	I0802 18:19:09.601395   41488 command_runner.go:130] > # default_capabilities = [
	I0802 18:19:09.601525   41488 command_runner.go:130] > # 	"CHOWN",
	I0802 18:19:09.601662   41488 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0802 18:19:09.601795   41488 command_runner.go:130] > # 	"FSETID",
	I0802 18:19:09.601972   41488 command_runner.go:130] > # 	"FOWNER",
	I0802 18:19:09.602069   41488 command_runner.go:130] > # 	"SETGID",
	I0802 18:19:09.602180   41488 command_runner.go:130] > # 	"SETUID",
	I0802 18:19:09.602321   41488 command_runner.go:130] > # 	"SETPCAP",
	I0802 18:19:09.602471   41488 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0802 18:19:09.602603   41488 command_runner.go:130] > # 	"KILL",
	I0802 18:19:09.602720   41488 command_runner.go:130] > # ]
	I0802 18:19:09.602737   41488 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0802 18:19:09.602748   41488 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0802 18:19:09.602986   41488 command_runner.go:130] > # add_inheritable_capabilities = false
	I0802 18:19:09.603000   41488 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0802 18:19:09.603010   41488 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0802 18:19:09.603018   41488 command_runner.go:130] > default_sysctls = [
	I0802 18:19:09.603060   41488 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0802 18:19:09.603146   41488 command_runner.go:130] > ]
	I0802 18:19:09.603254   41488 command_runner.go:130] > # List of devices on the host that a
	I0802 18:19:09.603427   41488 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0802 18:19:09.605162   41488 command_runner.go:130] > # allowed_devices = [
	I0802 18:19:09.605179   41488 command_runner.go:130] > # 	"/dev/fuse",
	I0802 18:19:09.605185   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605194   41488 command_runner.go:130] > # List of additional devices. specified as
	I0802 18:19:09.605206   41488 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0802 18:19:09.605214   41488 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0802 18:19:09.605228   41488 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0802 18:19:09.605238   41488 command_runner.go:130] > # additional_devices = [
	I0802 18:19:09.605243   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605252   41488 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0802 18:19:09.605261   41488 command_runner.go:130] > # cdi_spec_dirs = [
	I0802 18:19:09.605270   41488 command_runner.go:130] > # 	"/etc/cdi",
	I0802 18:19:09.605277   41488 command_runner.go:130] > # 	"/var/run/cdi",
	I0802 18:19:09.605281   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605288   41488 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0802 18:19:09.605300   41488 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0802 18:19:09.605311   41488 command_runner.go:130] > # Defaults to false.
	I0802 18:19:09.605319   41488 command_runner.go:130] > # device_ownership_from_security_context = false
	I0802 18:19:09.605332   41488 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0802 18:19:09.605345   41488 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0802 18:19:09.605354   41488 command_runner.go:130] > # hooks_dir = [
	I0802 18:19:09.605363   41488 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0802 18:19:09.605369   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605376   41488 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0802 18:19:09.605389   41488 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0802 18:19:09.605401   41488 command_runner.go:130] > # its default mounts from the following two files:
	I0802 18:19:09.605409   41488 command_runner.go:130] > #
	I0802 18:19:09.605422   41488 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0802 18:19:09.605437   41488 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0802 18:19:09.605448   41488 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0802 18:19:09.605454   41488 command_runner.go:130] > #
	I0802 18:19:09.605461   41488 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0802 18:19:09.605474   41488 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0802 18:19:09.605487   41488 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0802 18:19:09.605500   41488 command_runner.go:130] > #      only add mounts it finds in this file.
	I0802 18:19:09.605509   41488 command_runner.go:130] > #
	I0802 18:19:09.605520   41488 command_runner.go:130] > # default_mounts_file = ""
	I0802 18:19:09.605529   41488 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0802 18:19:09.605541   41488 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0802 18:19:09.605546   41488 command_runner.go:130] > pids_limit = 1024
	I0802 18:19:09.605556   41488 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0802 18:19:09.605567   41488 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0802 18:19:09.605580   41488 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0802 18:19:09.605592   41488 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0802 18:19:09.605602   41488 command_runner.go:130] > # log_size_max = -1
	I0802 18:19:09.605612   41488 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0802 18:19:09.605620   41488 command_runner.go:130] > # log_to_journald = false
	I0802 18:19:09.605630   41488 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0802 18:19:09.605641   41488 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0802 18:19:09.605650   41488 command_runner.go:130] > # Path to directory for container attach sockets.
	I0802 18:19:09.605661   41488 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0802 18:19:09.605672   41488 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0802 18:19:09.605681   41488 command_runner.go:130] > # bind_mount_prefix = ""
	I0802 18:19:09.605691   41488 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0802 18:19:09.605700   41488 command_runner.go:130] > # read_only = false
	I0802 18:19:09.605709   41488 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0802 18:19:09.605717   41488 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0802 18:19:09.605726   41488 command_runner.go:130] > # live configuration reload.
	I0802 18:19:09.605733   41488 command_runner.go:130] > # log_level = "info"
	I0802 18:19:09.605745   41488 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0802 18:19:09.605756   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.605766   41488 command_runner.go:130] > # log_filter = ""
	I0802 18:19:09.605776   41488 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0802 18:19:09.605787   41488 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0802 18:19:09.605794   41488 command_runner.go:130] > # separated by comma.
	I0802 18:19:09.605804   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605821   41488 command_runner.go:130] > # uid_mappings = ""
	I0802 18:19:09.605840   41488 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0802 18:19:09.605862   41488 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0802 18:19:09.605875   41488 command_runner.go:130] > # separated by comma.
	I0802 18:19:09.605895   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605913   41488 command_runner.go:130] > # gid_mappings = ""
	I0802 18:19:09.605928   41488 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0802 18:19:09.605939   41488 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0802 18:19:09.605954   41488 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0802 18:19:09.605962   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605967   41488 command_runner.go:130] > # minimum_mappable_uid = -1
	I0802 18:19:09.605976   41488 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0802 18:19:09.605990   41488 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0802 18:19:09.605999   41488 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0802 18:19:09.606014   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.606023   41488 command_runner.go:130] > # minimum_mappable_gid = -1
	I0802 18:19:09.606033   41488 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0802 18:19:09.606045   41488 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0802 18:19:09.606052   41488 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0802 18:19:09.606056   41488 command_runner.go:130] > # ctr_stop_timeout = 30
	I0802 18:19:09.606064   41488 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0802 18:19:09.606077   41488 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0802 18:19:09.606087   41488 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0802 18:19:09.606098   41488 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0802 18:19:09.606105   41488 command_runner.go:130] > drop_infra_ctr = false
	I0802 18:19:09.606113   41488 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0802 18:19:09.606124   41488 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0802 18:19:09.606137   41488 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0802 18:19:09.606146   41488 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0802 18:19:09.606155   41488 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0802 18:19:09.606166   41488 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0802 18:19:09.606176   41488 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0802 18:19:09.606186   41488 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0802 18:19:09.606207   41488 command_runner.go:130] > # shared_cpuset = ""
	I0802 18:19:09.606219   41488 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0802 18:19:09.606230   41488 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0802 18:19:09.606238   41488 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0802 18:19:09.606253   41488 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0802 18:19:09.606264   41488 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0802 18:19:09.606275   41488 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0802 18:19:09.606288   41488 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0802 18:19:09.606295   41488 command_runner.go:130] > # enable_criu_support = false
	I0802 18:19:09.606301   41488 command_runner.go:130] > # Enable/disable the generation of the container,
	I0802 18:19:09.606313   41488 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0802 18:19:09.606324   41488 command_runner.go:130] > # enable_pod_events = false
	I0802 18:19:09.606336   41488 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0802 18:19:09.606348   41488 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0802 18:19:09.606359   41488 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0802 18:19:09.606366   41488 command_runner.go:130] > # default_runtime = "runc"
	I0802 18:19:09.606377   41488 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0802 18:19:09.606387   41488 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0802 18:19:09.606405   41488 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0802 18:19:09.606416   41488 command_runner.go:130] > # creation as a file is not desired either.
	I0802 18:19:09.606429   41488 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0802 18:19:09.606440   41488 command_runner.go:130] > # the hostname is being managed dynamically.
	I0802 18:19:09.606451   41488 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0802 18:19:09.606459   41488 command_runner.go:130] > # ]
	I0802 18:19:09.606467   41488 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0802 18:19:09.606479   41488 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0802 18:19:09.606492   41488 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0802 18:19:09.606503   41488 command_runner.go:130] > # Each entry in the table should follow the format:
	I0802 18:19:09.606511   41488 command_runner.go:130] > #
	I0802 18:19:09.606521   41488 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0802 18:19:09.606531   41488 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0802 18:19:09.606614   41488 command_runner.go:130] > # runtime_type = "oci"
	I0802 18:19:09.606632   41488 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0802 18:19:09.606638   41488 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0802 18:19:09.606645   41488 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0802 18:19:09.606656   41488 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0802 18:19:09.606665   41488 command_runner.go:130] > # monitor_env = []
	I0802 18:19:09.606676   41488 command_runner.go:130] > # privileged_without_host_devices = false
	I0802 18:19:09.606692   41488 command_runner.go:130] > # allowed_annotations = []
	I0802 18:19:09.606704   41488 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0802 18:19:09.606712   41488 command_runner.go:130] > # Where:
	I0802 18:19:09.606722   41488 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0802 18:19:09.606728   41488 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0802 18:19:09.606741   41488 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0802 18:19:09.606754   41488 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0802 18:19:09.606763   41488 command_runner.go:130] > #   in $PATH.
	I0802 18:19:09.606774   41488 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0802 18:19:09.606787   41488 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0802 18:19:09.606796   41488 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0802 18:19:09.606801   41488 command_runner.go:130] > #   state.
	I0802 18:19:09.606808   41488 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0802 18:19:09.606814   41488 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0802 18:19:09.606822   41488 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0802 18:19:09.606831   41488 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0802 18:19:09.606844   41488 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0802 18:19:09.606854   41488 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0802 18:19:09.606861   41488 command_runner.go:130] > #   The currently recognized values are:
	I0802 18:19:09.606872   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0802 18:19:09.606886   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0802 18:19:09.606895   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0802 18:19:09.606903   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0802 18:19:09.606917   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0802 18:19:09.606931   41488 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0802 18:19:09.606943   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0802 18:19:09.606958   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0802 18:19:09.606970   41488 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0802 18:19:09.606983   41488 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0802 18:19:09.606992   41488 command_runner.go:130] > #   deprecated option "conmon".
	I0802 18:19:09.607006   41488 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0802 18:19:09.607017   41488 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0802 18:19:09.607031   41488 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0802 18:19:09.607041   41488 command_runner.go:130] > #   should be moved to the container's cgroup
	I0802 18:19:09.607055   41488 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0802 18:19:09.607063   41488 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0802 18:19:09.607078   41488 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0802 18:19:09.607091   41488 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0802 18:19:09.607109   41488 command_runner.go:130] > #
	I0802 18:19:09.607119   41488 command_runner.go:130] > # Using the seccomp notifier feature:
	I0802 18:19:09.607127   41488 command_runner.go:130] > #
	I0802 18:19:09.607137   41488 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0802 18:19:09.607150   41488 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0802 18:19:09.607157   41488 command_runner.go:130] > #
	I0802 18:19:09.607167   41488 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0802 18:19:09.607180   41488 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0802 18:19:09.607188   41488 command_runner.go:130] > #
	I0802 18:19:09.607198   41488 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0802 18:19:09.607207   41488 command_runner.go:130] > # feature.
	I0802 18:19:09.607215   41488 command_runner.go:130] > #
	I0802 18:19:09.607226   41488 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0802 18:19:09.607234   41488 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0802 18:19:09.607247   41488 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0802 18:19:09.607260   41488 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0802 18:19:09.607271   41488 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0802 18:19:09.607279   41488 command_runner.go:130] > #
	I0802 18:19:09.607288   41488 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0802 18:19:09.607301   41488 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0802 18:19:09.607308   41488 command_runner.go:130] > #
	I0802 18:19:09.607313   41488 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0802 18:19:09.607323   41488 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0802 18:19:09.607331   41488 command_runner.go:130] > #
	I0802 18:19:09.607341   41488 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0802 18:19:09.607353   41488 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0802 18:19:09.607361   41488 command_runner.go:130] > # limitation.
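As a concrete illustration of the notifier workflow described in the comments above, the following sketch (hypothetical pod name and image, not part of this test run) sets the io.kubernetes.cri-o.seccompNotifierAction annotation on a pod and uses restartPolicy: Never, as required so the kubelet does not immediately restart a terminated container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-debug                    # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never                   # required; otherwise the kubelet restarts the container
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      seccompProfile:
        type: RuntimeDefault             # the notifier only acts on containers running under a seccomp profile
EOF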
	I0802 18:19:09.607368   41488 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0802 18:19:09.607377   41488 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0802 18:19:09.607386   41488 command_runner.go:130] > runtime_type = "oci"
	I0802 18:19:09.607394   41488 command_runner.go:130] > runtime_root = "/run/runc"
	I0802 18:19:09.607398   41488 command_runner.go:130] > runtime_config_path = ""
	I0802 18:19:09.607406   41488 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0802 18:19:09.607416   41488 command_runner.go:130] > monitor_cgroup = "pod"
	I0802 18:19:09.607433   41488 command_runner.go:130] > monitor_exec_cgroup = ""
	I0802 18:19:09.607442   41488 command_runner.go:130] > monitor_env = [
	I0802 18:19:09.607454   41488 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0802 18:19:09.607461   41488 command_runner.go:130] > ]
	I0802 18:19:09.607468   41488 command_runner.go:130] > privileged_without_host_devices = false
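The [crio.runtime.runtimes.runc] block above is the generated default. For comparison, a hypothetical drop-in that registers crun as an additional handler and allows the seccomp notifier annotation could look like the following sketch (handler name, binary path, and drop-in filename are assumptions, not taken from this run):

sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
allowed_annotations = [
    "io.kubernetes.cri-o.seccompNotifierAction",
]
EOF
sudo systemctl restart crio   # drop-ins under /etc/crio/crio.conf.d are merged over crio.conf on startup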
	I0802 18:19:09.607480   41488 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0802 18:19:09.607487   41488 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0802 18:19:09.607499   41488 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0802 18:19:09.607515   41488 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0802 18:19:09.607529   41488 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I0802 18:19:09.607540   41488 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0802 18:19:09.607557   41488 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0802 18:19:09.607568   41488 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0802 18:19:09.607579   41488 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0802 18:19:09.607591   41488 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0802 18:19:09.607600   41488 command_runner.go:130] > # Example:
	I0802 18:19:09.607608   41488 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0802 18:19:09.607615   41488 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0802 18:19:09.607622   41488 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0802 18:19:09.607634   41488 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0802 18:19:09.607639   41488 command_runner.go:130] > # cpuset = 0
	I0802 18:19:09.607645   41488 command_runner.go:130] > # cpushares = "0-1"
	I0802 18:19:09.607649   41488 command_runner.go:130] > # Where:
	I0802 18:19:09.607654   41488 command_runner.go:130] > # The workload name is workload-type.
	I0802 18:19:09.607662   41488 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0802 18:19:09.607671   41488 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0802 18:19:09.607680   41488 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0802 18:19:09.607691   41488 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0802 18:19:09.607700   41488 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
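A minimal sketch of the workload mechanism described above, using a hypothetical workload name and pod; the activation annotation is key-only, exactly as the comments state:

sudo tee /etc/crio/crio.conf.d/30-workloads.conf <<'EOF'
[crio.runtime.workloads.pinned]
activation_annotation = "io.crio/pinned"
annotation_prefix = "io.crio.pinned"
[crio.runtime.workloads.pinned.resources]
cpuset = "0-1"
EOF
# a pod opts in with the key-only activation annotation (the value is ignored):
kubectl annotate pod mypod io.crio/pinned=""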
	I0802 18:19:09.607708   41488 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0802 18:19:09.607717   41488 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0802 18:19:09.607782   41488 command_runner.go:130] > # Default value is set to true
	I0802 18:19:09.607843   41488 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0802 18:19:09.607858   41488 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0802 18:19:09.607873   41488 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0802 18:19:09.607882   41488 command_runner.go:130] > # Default value is set to 'false'
	I0802 18:19:09.607907   41488 command_runner.go:130] > # disable_hostport_mapping = false
	I0802 18:19:09.607921   41488 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0802 18:19:09.607929   41488 command_runner.go:130] > #
	I0802 18:19:09.607940   41488 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0802 18:19:09.607962   41488 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0802 18:19:09.607974   41488 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0802 18:19:09.607987   41488 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0802 18:19:09.608000   41488 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0802 18:19:09.608009   41488 command_runner.go:130] > [crio.image]
	I0802 18:19:09.608019   41488 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0802 18:19:09.608029   41488 command_runner.go:130] > # default_transport = "docker://"
	I0802 18:19:09.608041   41488 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0802 18:19:09.608053   41488 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0802 18:19:09.608060   41488 command_runner.go:130] > # global_auth_file = ""
	I0802 18:19:09.608067   41488 command_runner.go:130] > # The image used to instantiate infra containers.
	I0802 18:19:09.608078   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.608090   41488 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0802 18:19:09.608103   41488 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0802 18:19:09.608115   41488 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0802 18:19:09.608126   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.608135   41488 command_runner.go:130] > # pause_image_auth_file = ""
	I0802 18:19:09.608144   41488 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0802 18:19:09.608152   41488 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0802 18:19:09.608165   41488 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0802 18:19:09.608177   41488 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0802 18:19:09.608187   41488 command_runner.go:130] > # pause_command = "/pause"
	I0802 18:19:09.608199   41488 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0802 18:19:09.608210   41488 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0802 18:19:09.608222   41488 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0802 18:19:09.608230   41488 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0802 18:19:09.608242   41488 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0802 18:19:09.608256   41488 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0802 18:19:09.608265   41488 command_runner.go:130] > # pinned_images = [
	I0802 18:19:09.608273   41488 command_runner.go:130] > # ]
	I0802 18:19:09.608282   41488 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0802 18:19:09.608295   41488 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0802 18:19:09.608312   41488 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0802 18:19:09.608324   41488 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0802 18:19:09.608335   41488 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0802 18:19:09.608358   41488 command_runner.go:130] > # signature_policy = ""
	I0802 18:19:09.608369   41488 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0802 18:19:09.608387   41488 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0802 18:19:09.608397   41488 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0802 18:19:09.608407   41488 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0802 18:19:09.608419   41488 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0802 18:19:09.608431   41488 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0802 18:19:09.608443   41488 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0802 18:19:09.608460   41488 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0802 18:19:09.608469   41488 command_runner.go:130] > # changing them here.
	I0802 18:19:09.608478   41488 command_runner.go:130] > # insecure_registries = [
	I0802 18:19:09.608484   41488 command_runner.go:130] > # ]
	I0802 18:19:09.608493   41488 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0802 18:19:09.608505   41488 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0802 18:19:09.608514   41488 command_runner.go:130] > # image_volumes = "mkdir"
	I0802 18:19:09.608526   41488 command_runner.go:130] > # Temporary directory to use for storing big files
	I0802 18:19:09.608537   41488 command_runner.go:130] > # big_files_temporary_dir = ""
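As the comments above recommend, registry-level settings are better kept in containers-registries.conf(5); a drop-in for a hypothetical insecure in-cluster registry could look like this sketch:

sudo tee /etc/containers/registries.conf.d/50-insecure.conf <<'EOF'
[[registry]]
# hypothetical registry name, not part of this test run
location = "registry.internal.example:5000"
insecure = true
EOF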
	I0802 18:19:09.608549   41488 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0802 18:19:09.608558   41488 command_runner.go:130] > # CNI plugins.
	I0802 18:19:09.608565   41488 command_runner.go:130] > [crio.network]
	I0802 18:19:09.608571   41488 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0802 18:19:09.608582   41488 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0802 18:19:09.608592   41488 command_runner.go:130] > # cni_default_network = ""
	I0802 18:19:09.608603   41488 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0802 18:19:09.608614   41488 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0802 18:19:09.608626   41488 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0802 18:19:09.608634   41488 command_runner.go:130] > # plugin_dirs = [
	I0802 18:19:09.608643   41488 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0802 18:19:09.608651   41488 command_runner.go:130] > # ]
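For context on network_dir and plugin_dirs: this run uses kindnet (see the CNI detection further down), but a generic bridge conflist dropped into /etc/cni/net.d is the kind of file CRI-O would pick up; the sketch below uses an illustrative network name and subnet:

sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "ranges": [ [ { "subnet": "10.244.0.0/16" } ] ] }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF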
	I0802 18:19:09.608660   41488 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0802 18:19:09.608664   41488 command_runner.go:130] > [crio.metrics]
	I0802 18:19:09.608672   41488 command_runner.go:130] > # Globally enable or disable metrics support.
	I0802 18:19:09.608682   41488 command_runner.go:130] > enable_metrics = true
	I0802 18:19:09.608695   41488 command_runner.go:130] > # Specify enabled metrics collectors.
	I0802 18:19:09.608705   41488 command_runner.go:130] > # Per default all metrics are enabled.
	I0802 18:19:09.608719   41488 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0802 18:19:09.608731   41488 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0802 18:19:09.608742   41488 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0802 18:19:09.608749   41488 command_runner.go:130] > # metrics_collectors = [
	I0802 18:19:09.608753   41488 command_runner.go:130] > # 	"operations",
	I0802 18:19:09.608761   41488 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0802 18:19:09.608771   41488 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0802 18:19:09.608781   41488 command_runner.go:130] > # 	"operations_errors",
	I0802 18:19:09.608790   41488 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0802 18:19:09.608801   41488 command_runner.go:130] > # 	"image_pulls_by_name",
	I0802 18:19:09.608810   41488 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0802 18:19:09.608820   41488 command_runner.go:130] > # 	"image_pulls_failures",
	I0802 18:19:09.608827   41488 command_runner.go:130] > # 	"image_pulls_successes",
	I0802 18:19:09.608833   41488 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0802 18:19:09.608838   41488 command_runner.go:130] > # 	"image_layer_reuse",
	I0802 18:19:09.608848   41488 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0802 18:19:09.608858   41488 command_runner.go:130] > # 	"containers_oom_total",
	I0802 18:19:09.608867   41488 command_runner.go:130] > # 	"containers_oom",
	I0802 18:19:09.608877   41488 command_runner.go:130] > # 	"processes_defunct",
	I0802 18:19:09.608886   41488 command_runner.go:130] > # 	"operations_total",
	I0802 18:19:09.608896   41488 command_runner.go:130] > # 	"operations_latency_seconds",
	I0802 18:19:09.608906   41488 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0802 18:19:09.608914   41488 command_runner.go:130] > # 	"operations_errors_total",
	I0802 18:19:09.608919   41488 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0802 18:19:09.608927   41488 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0802 18:19:09.608934   41488 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0802 18:19:09.608944   41488 command_runner.go:130] > # 	"image_pulls_success_total",
	I0802 18:19:09.608954   41488 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0802 18:19:09.608964   41488 command_runner.go:130] > # 	"containers_oom_count_total",
	I0802 18:19:09.608975   41488 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0802 18:19:09.608985   41488 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0802 18:19:09.608992   41488 command_runner.go:130] > # ]
	I0802 18:19:09.609001   41488 command_runner.go:130] > # The port on which the metrics server will listen.
	I0802 18:19:09.609006   41488 command_runner.go:130] > # metrics_port = 9090
	I0802 18:19:09.609014   41488 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0802 18:19:09.609024   41488 command_runner.go:130] > # metrics_socket = ""
	I0802 18:19:09.609035   41488 command_runner.go:130] > # The certificate for the secure metrics server.
	I0802 18:19:09.609048   41488 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0802 18:19:09.609065   41488 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0802 18:19:09.609075   41488 command_runner.go:130] > # certificate on any modification event.
	I0802 18:19:09.609083   41488 command_runner.go:130] > # metrics_cert = ""
	I0802 18:19:09.609088   41488 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0802 18:19:09.609096   41488 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0802 18:19:09.609105   41488 command_runner.go:130] > # metrics_key = ""
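Since enable_metrics is already true in this configuration, the endpoint can be spot-checked on the node; a sketch assuming the default port and a locally reachable listener:

curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head
# crio_operations_total, crio_image_pulls_* and the other collectors listed above appear here once enabled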
	I0802 18:19:09.609118   41488 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0802 18:19:09.609127   41488 command_runner.go:130] > [crio.tracing]
	I0802 18:19:09.609138   41488 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0802 18:19:09.609148   41488 command_runner.go:130] > # enable_tracing = false
	I0802 18:19:09.609160   41488 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0802 18:19:09.609168   41488 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0802 18:19:09.609174   41488 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0802 18:19:09.609186   41488 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0802 18:19:09.609196   41488 command_runner.go:130] > # CRI-O NRI configuration.
	I0802 18:19:09.609204   41488 command_runner.go:130] > [crio.nri]
	I0802 18:19:09.609214   41488 command_runner.go:130] > # Globally enable or disable NRI.
	I0802 18:19:09.609223   41488 command_runner.go:130] > # enable_nri = false
	I0802 18:19:09.609230   41488 command_runner.go:130] > # NRI socket to listen on.
	I0802 18:19:09.609237   41488 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0802 18:19:09.609246   41488 command_runner.go:130] > # NRI plugin directory to use.
	I0802 18:19:09.609255   41488 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0802 18:19:09.609260   41488 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0802 18:19:09.609270   41488 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0802 18:19:09.609281   41488 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0802 18:19:09.609292   41488 command_runner.go:130] > # nri_disable_connections = false
	I0802 18:19:09.609304   41488 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0802 18:19:09.609314   41488 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0802 18:19:09.609325   41488 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0802 18:19:09.609334   41488 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0802 18:19:09.609347   41488 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0802 18:19:09.609353   41488 command_runner.go:130] > [crio.stats]
	I0802 18:19:09.609359   41488 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0802 18:19:09.609366   41488 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0802 18:19:09.609373   41488 command_runner.go:130] > # stats_collection_period = 0
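With the full configuration dumped above, the effective (merged) settings can be confirmed on the node itself; a small sketch, assuming the crio and crictl binaries are on the node's PATH:

sudo crio config | grep -A 3 '\[crio.metrics\]'   # effective TOML after drop-ins are merged
sudo crictl info                                  # runtime status and config as reported over the CRI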
	I0802 18:19:09.609416   41488 command_runner.go:130] ! time="2024-08-02 18:19:09.549271871Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0802 18:19:09.609437   41488 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0802 18:19:09.609566   41488 cni.go:84] Creating CNI manager for ""
	I0802 18:19:09.609583   41488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0802 18:19:09.609595   41488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:19:09.609615   41488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-250383 NodeName:multinode-250383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:19:09.609746   41488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-250383"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
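Before kubeadm consumes a generated file like the one above, it can be sanity-checked in place; a sketch using the versioned binary listed in the next step, and assuming kubeadm's "config validate" subcommand (present in recent kubeadm releases):

sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new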
	
	I0802 18:19:09.609813   41488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:19:09.619540   41488 command_runner.go:130] > kubeadm
	I0802 18:19:09.619558   41488 command_runner.go:130] > kubectl
	I0802 18:19:09.619563   41488 command_runner.go:130] > kubelet
	I0802 18:19:09.619578   41488 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:19:09.619627   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:19:09.628735   41488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0802 18:19:09.644275   41488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:19:09.659415   41488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0802 18:19:09.676196   41488 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0802 18:19:09.679804   41488 command_runner.go:130] > 192.168.39.67	control-plane.minikube.internal
	I0802 18:19:09.680094   41488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:19:09.815507   41488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:19:09.829363   41488 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383 for IP: 192.168.39.67
	I0802 18:19:09.829393   41488 certs.go:194] generating shared ca certs ...
	I0802 18:19:09.829409   41488 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:19:09.829569   41488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:19:09.829606   41488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:19:09.829615   41488 certs.go:256] generating profile certs ...
	I0802 18:19:09.829698   41488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/client.key
	I0802 18:19:09.829781   41488 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key.1086a566
	I0802 18:19:09.829828   41488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key
	I0802 18:19:09.829839   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 18:19:09.829850   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 18:19:09.829861   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 18:19:09.829874   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 18:19:09.829884   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 18:19:09.829899   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 18:19:09.829910   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 18:19:09.829920   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 18:19:09.829975   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:19:09.830003   41488 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:19:09.830011   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:19:09.830035   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:19:09.830059   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:19:09.830080   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:19:09.830141   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:19:09.830169   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 18:19:09.830182   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 18:19:09.830195   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:09.830749   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:19:09.853563   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:19:09.875798   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:19:09.898398   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:19:09.921220   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0802 18:19:09.943138   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:19:09.965160   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:19:09.986964   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:19:10.008699   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:19:10.030329   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:19:10.052571   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:19:10.074632   41488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:19:10.089579   41488 ssh_runner.go:195] Run: openssl version
	I0802 18:19:10.094917   41488 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0802 18:19:10.095056   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:19:10.106360   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110360   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110498   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110597   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.115911   41488 command_runner.go:130] > b5213941
	I0802 18:19:10.115978   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:19:10.126009   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:19:10.137826   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141890   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141920   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141968   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.147077   41488 command_runner.go:130] > 51391683
	I0802 18:19:10.147155   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:19:10.155881   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:19:10.165723   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.169878   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.170116   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.170166   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.175194   41488 command_runner.go:130] > 3ec20f2e
	I0802 18:19:10.175268   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
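The three blocks above all follow the same pattern: hash the CA certificate with openssl, then expose it under /etc/ssl/certs as <hash>.0 so OpenSSL's CApath lookup can find it. Generalized as a sketch using the minikubeCA file from this run:

HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expected output ends in: OK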
	I0802 18:19:10.184192   41488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:19:10.188292   41488 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:19:10.188311   41488 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0802 18:19:10.188316   41488 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0802 18:19:10.188323   41488 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0802 18:19:10.188331   41488 command_runner.go:130] > Access: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188336   41488 command_runner.go:130] > Modify: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188343   41488 command_runner.go:130] > Change: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188350   41488 command_runner.go:130] >  Birth: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188400   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:19:10.193451   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.193589   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:19:10.198724   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.198774   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:19:10.203701   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.203864   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:19:10.209040   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.209158   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:19:10.214779   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.214823   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0802 18:19:10.219933   41488 command_runner.go:130] > Certificate will not expire
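Each of the -checkend 86400 calls above tests a single certificate for expiry within the next 24 hours; the same check can be looped over the whole certificate tree, as in this sketch (paths taken from this run):

for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
  if openssl x509 -noout -in "$c" -checkend 86400 >/dev/null; then
    echo "ok:       $c"
  else
    echo "expiring: $c"
  fi
done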
	I0802 18:19:10.220202   41488 kubeadm.go:392] StartCluster: {Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:19:10.220298   41488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:19:10.220369   41488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:19:10.255092   41488 command_runner.go:130] > 9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5
	I0802 18:19:10.255141   41488 command_runner.go:130] > 2a11e0f9813bcbd6e4131452b86d24f9202bbf7bce3a8f936eeed2294fedeb9c
	I0802 18:19:10.255151   41488 command_runner.go:130] > 595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907
	I0802 18:19:10.255161   41488 command_runner.go:130] > b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79
	I0802 18:19:10.255170   41488 command_runner.go:130] > 4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee
	I0802 18:19:10.255176   41488 command_runner.go:130] > bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91
	I0802 18:19:10.255181   41488 command_runner.go:130] > 995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a
	I0802 18:19:10.255198   41488 command_runner.go:130] > 98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b
	I0802 18:19:10.255210   41488 command_runner.go:130] > e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd
	I0802 18:19:10.255237   41488 cri.go:89] found id: "9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5"
	I0802 18:19:10.255250   41488 cri.go:89] found id: "2a11e0f9813bcbd6e4131452b86d24f9202bbf7bce3a8f936eeed2294fedeb9c"
	I0802 18:19:10.255255   41488 cri.go:89] found id: "595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907"
	I0802 18:19:10.255260   41488 cri.go:89] found id: "b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79"
	I0802 18:19:10.255267   41488 cri.go:89] found id: "4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee"
	I0802 18:19:10.255272   41488 cri.go:89] found id: "bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91"
	I0802 18:19:10.255276   41488 cri.go:89] found id: "995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a"
	I0802 18:19:10.255282   41488 cri.go:89] found id: "98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b"
	I0802 18:19:10.255285   41488 cri.go:89] found id: "e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd"
	I0802 18:19:10.255292   41488 cri.go:89] found id: ""
	I0802 18:19:10.255344   41488 ssh_runner.go:195] Run: sudo runc list -f json
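For reference, the two queries above can be reproduced interactively on the node; a sketch assuming crictl (from cri-tools) and runc are on PATH, with the runtime_root value from the CRI-O config earlier in this log:

sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system            # human-readable listing
sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system    # just the container IDs, as captured above
sudo runc --root /run/runc list                                              # OCI-level view of the same containers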
	
	
	==> CRI-O <==
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.563099768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722622857563075813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a73688e-90cf-4c70-a7a2-14ef833cdd42 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.563748832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91e47ecc-a81b-4a81-a526-2433a2aa47f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.563803855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91e47ecc-a81b-4a81-a526-2433a2aa47f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.564146460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91e47ecc-a81b-4a81-a526-2433a2aa47f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.606325907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6574bea-2fbd-44b2-80ca-1a4e3a4b223a name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.606415579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6574bea-2fbd-44b2-80ca-1a4e3a4b223a name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.607679584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a41ec722-7b98-4139-b298-c01efade0e88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.608087116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722622857608066150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a41ec722-7b98-4139-b298-c01efade0e88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.608658181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5704259d-4237-4265-a44c-f554993c96dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.608725558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5704259d-4237-4265-a44c-f554993c96dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.609044874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5704259d-4237-4265-a44c-f554993c96dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.648072644Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9686a029-4727-4819-9f98-c3f05ad41845 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.648187016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9686a029-4727-4819-9f98-c3f05ad41845 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.649538011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5bf60d6b-051d-4ffe-b00f-a8d356c77a14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.649997229Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722622857649965214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bf60d6b-051d-4ffe-b00f-a8d356c77a14 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.650644418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bda55098-84ab-43e1-884c-cf8ce1b7131d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.650710057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bda55098-84ab-43e1-884c-cf8ce1b7131d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.651077918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bda55098-84ab-43e1-884c-cf8ce1b7131d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.692742072Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a7d612b-5e4a-49ad-b03c-7fa25e9f8122 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.692966563Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a7d612b-5e4a-49ad-b03c-7fa25e9f8122 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.694143475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c32a10b0-4b09-4b31-a98a-781b2f8a95dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.694735216Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722622857694708510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c32a10b0-4b09-4b31-a98a-781b2f8a95dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.695309569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c11ae4f8-6b3e-43a4-a4b3-0fa2629dc861 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.695371358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c11ae4f8-6b3e-43a4-a4b3-0fa2629dc861 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:20:57 multinode-250383 crio[2961]: time="2024-08-02 18:20:57.695906854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c11ae4f8-6b3e-43a4-a4b3-0fa2629dc861 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f825d06a22a74       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   835e9f0282b33       busybox-fc5497c4f-6vqf8
	2c939adbf7379       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   db7b4c3cee33e       kindnet-k47qb
	26eebc9ebbff1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   c9939974839ae       kube-proxy-sjq5b
	fbb2a64ca9fbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   aad616f403140       storage-provisioner
	beab6760bae27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   62a3d90a23f3b       coredns-7db6d8ff4d-p22xc
	5cbb8f6618e46       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   ca6032e84f4e1       kube-scheduler-multinode-250383
	1225b7f1b1c1f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   c45c22fcbbaea       kube-apiserver-multinode-250383
	aceac5df534ea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   6bc745953bc49       kube-controller-manager-multinode-250383
	2d2e98aabffd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   f316c337fc455       etcd-multinode-250383
	9ac19b826e084       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   01171b0fa1c46       coredns-7db6d8ff4d-p22xc
	080296105c460       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   e9f1315c6d603       busybox-fc5497c4f-6vqf8
	595e69fd3041a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   aca3fcdb5ef7d       storage-provisioner
	b117f7898e49b       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   3a0fc305ccb27       kindnet-k47qb
	4ad8d7e314b1e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   8acb9191287bb       kube-proxy-sjq5b
	bfcb3f51365d2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   34c16d3eea7b3       kube-scheduler-multinode-250383
	995dfd5bd7840       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   493457b81a9b3       etcd-multinode-250383
	98da8355877a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   e79a3a9f456f7       kube-controller-manager-multinode-250383
	e1c10cb7907ec       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   45ee7a236c1aa       kube-apiserver-multinode-250383
	
	
	==> coredns [9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49568 - 7292 "HINFO IN 6251548447806641683.3005424376035411823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011124339s
	
	
	==> coredns [beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38561 - 32959 "HINFO IN 5626239399824007099.1786114741129606773. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011224542s
	
	
	==> describe nodes <==
	Name:               multinode-250383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-250383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=multinode-250383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_12_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-250383
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:20:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-250383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 931f9cb08c51491586b0f1037696dd39
	  System UUID:                931f9cb0-8c51-4915-86b0-f1037696dd39
	  Boot ID:                    f9a248e1-f9c4-46a4-85cf-fb7d585f9911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6vqf8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 coredns-7db6d8ff4d-p22xc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m26s
	  kube-system                 etcd-multinode-250383                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m40s
	  kube-system                 kindnet-k47qb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-apiserver-multinode-250383             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-controller-manager-multinode-250383    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-proxy-sjq5b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-scheduler-multinode-250383             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m25s                kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m40s                kubelet          Node multinode-250383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m40s                kubelet          Node multinode-250383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m40s                kubelet          Node multinode-250383 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m40s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m26s                node-controller  Node multinode-250383 event: Registered Node multinode-250383 in Controller
	  Normal  NodeReady                8m10s                kubelet          Node multinode-250383 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node multinode-250383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node multinode-250383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node multinode-250383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-250383 event: Registered Node multinode-250383 in Controller
	
	
	Name:               multinode-250383-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-250383-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=multinode-250383
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T18_19_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:19:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-250383-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:19:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:19:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:19:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:20:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    multinode-250383-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 917399c6f0634bfda38537db48c4baf3
	  System UUID:                917399c6-f063-4bfd-a385-37db48c4baf3
	  Boot ID:                    6104aa80-bea4-4d1f-90d3-c3fa75d62b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hntjs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-xdnv2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m43s
	  kube-system                 kube-proxy-w4hmf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m37s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m43s (x2 over 7m43s)  kubelet     Node multinode-250383-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s (x2 over 7m43s)  kubelet     Node multinode-250383-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s (x2 over 7m43s)  kubelet     Node multinode-250383-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m23s                  kubelet     Node multinode-250383-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x2 over 62s)      kubelet     Node multinode-250383-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 62s)      kubelet     Node multinode-250383-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 62s)      kubelet     Node multinode-250383-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                42s                    kubelet     Node multinode-250383-m02 status is now: NodeReady
	
	
	Name:               multinode-250383-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-250383-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=multinode-250383
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T18_20_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:20:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-250383-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:20:54 +0000   Fri, 02 Aug 2024 18:20:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:20:54 +0000   Fri, 02 Aug 2024 18:20:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:20:54 +0000   Fri, 02 Aug 2024 18:20:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:20:54 +0000   Fri, 02 Aug 2024 18:20:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    multinode-250383-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b11e4850be154558a0ccc671a64368d4
	  System UUID:                b11e4850-be15-4558-a0cc-c671a64368d4
	  Boot ID:                    80b10860-b84d-4870-9252-4aced2f90193
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fb7dl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-hnzvs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m39s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m51s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m46s (x2 over 6m46s)  kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x2 over 6m46s)  kubelet     Node multinode-250383-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x2 over 6m46s)  kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m25s                  kubelet     Node multinode-250383-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m56s (x2 over 5m56s)  kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m56s (x2 over 5m56s)  kubelet     Node multinode-250383-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m56s (x2 over 5m56s)  kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m37s                  kubelet     Node multinode-250383-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-250383-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-250383-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-250383-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053259] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.198867] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.127623] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.271889] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.953768] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.846237] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.057590] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.979885] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.102059] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.327660] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.826460] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.172099] kauditd_printk_skb: 58 callbacks suppressed
	[Aug 2 18:13] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 2 18:18] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.145531] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.171702] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.139256] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.390036] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[Aug 2 18:19] systemd-fstab-generator[3067]: Ignoring "noauto" option for root device
	[  +0.081906] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.752381] systemd-fstab-generator[3191]: Ignoring "noauto" option for root device
	[  +4.690562] kauditd_printk_skb: 76 callbacks suppressed
	[ +11.911951] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.329741] systemd-fstab-generator[4041]: Ignoring "noauto" option for root device
	[ +17.500822] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9] <==
	{"level":"info","ts":"2024-08-02T18:19:12.823979Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T18:19:12.823988Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T18:19:12.824247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 switched to configuration voters=(929259593797349653)"}
	{"level":"info","ts":"2024-08-02T18:19:12.824319Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","added-peer-id":"ce564ad586a3115","added-peer-peer-urls":["https://192.168.39.67:2380"]}
	{"level":"info","ts":"2024-08-02T18:19:12.824431Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:19:12.824503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:19:12.838838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-02T18:19:12.839087Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce564ad586a3115","initial-advertise-peer-urls":["https://192.168.39.67:2380"],"listen-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.67:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T18:19:12.83913Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:19:12.839241Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:19:12.839261Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:19:13.992573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgPreVoteResp from ce564ad586a3115 at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.992692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgVoteResp from ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.9927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.99271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce564ad586a3115 elected leader ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:14.002705Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce564ad586a3115","local-member-attributes":"{Name:multinode-250383 ClientURLs:[https://192.168.39.67:2379]}","request-path":"/0/members/ce564ad586a3115/attributes","cluster-id":"429166af17098d53","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:19:14.002895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:19:14.004514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:19:14.009499Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:19:14.00953Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:19:14.010844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:19:14.011352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.67:2379"}
	
	
	==> etcd [995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a] <==
	{"level":"info","ts":"2024-08-02T18:13:15.457374Z","caller":"traceutil/trace.go:171","msg":"trace[1943046641] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"151.811147ms","start":"2024-08-02T18:13:15.305552Z","end":"2024-08-02T18:13:15.457363Z","steps":["trace[1943046641] 'process raft request'  (duration: 151.2771ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:25.784905Z","caller":"traceutil/trace.go:171","msg":"trace[1248989928] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"192.6424ms","start":"2024-08-02T18:13:25.592236Z","end":"2024-08-02T18:13:25.784879Z","steps":["trace[1248989928] 'process raft request'  (duration: 192.492219ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:26.04872Z","caller":"traceutil/trace.go:171","msg":"trace[888830746] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"107.170707ms","start":"2024-08-02T18:13:25.941529Z","end":"2024-08-02T18:13:26.0487Z","steps":["trace[888830746] 'read index received'  (duration: 54.373979ms)","trace[888830746] 'applied index is now lower than readState.Index'  (duration: 52.795628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:13:26.048912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.328428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T18:13:26.049005Z","caller":"traceutil/trace.go:171","msg":"trace[617738981] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:505; }","duration":"107.533589ms","start":"2024-08-02T18:13:25.941454Z","end":"2024-08-02T18:13:26.048988Z","steps":["trace[617738981] 'agreement among raft nodes before linearized reading'  (duration: 107.376602ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:26.049067Z","caller":"traceutil/trace.go:171","msg":"trace[1224528198] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"206.304026ms","start":"2024-08-02T18:13:25.842748Z","end":"2024-08-02T18:13:26.049052Z","steps":["trace[1224528198] 'process raft request'  (duration: 153.19601ms)","trace[1224528198] 'compare'  (duration: 52.642312ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:13:26.343957Z","caller":"traceutil/trace.go:171","msg":"trace[1351660476] linearizableReadLoop","detail":"{readStateIndex:532; appliedIndex:531; }","duration":"239.88476ms","start":"2024-08-02T18:13:26.104056Z","end":"2024-08-02T18:13:26.34394Z","steps":["trace[1351660476] 'read index received'  (duration: 182.025283ms)","trace[1351660476] 'applied index is now lower than readState.Index'  (duration: 57.858672ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:13:26.344098Z","caller":"traceutil/trace.go:171","msg":"trace[804152674] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"275.792401ms","start":"2024-08-02T18:13:26.068297Z","end":"2024-08-02T18:13:26.34409Z","steps":["trace[804152674] 'process raft request'  (duration: 217.827469ms)","trace[804152674] 'compare'  (duration: 57.750041ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:13:26.344294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.238081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T18:13:26.344348Z","caller":"traceutil/trace.go:171","msg":"trace[1019278426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:506; }","duration":"240.316483ms","start":"2024-08-02T18:13:26.104021Z","end":"2024-08-02T18:13:26.344337Z","steps":["trace[1019278426] 'agreement among raft nodes before linearized reading'  (duration: 240.243881ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081735Z","caller":"traceutil/trace.go:171","msg":"trace[2060008207] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"229.196911ms","start":"2024-08-02T18:14:12.852506Z","end":"2024-08-02T18:14:13.081703Z","steps":["trace[2060008207] 'process raft request'  (duration: 224.462043ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081777Z","caller":"traceutil/trace.go:171","msg":"trace[1122326976] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"148.045451ms","start":"2024-08-02T18:14:12.933714Z","end":"2024-08-02T18:14:13.081759Z","steps":["trace[1122326976] 'process raft request'  (duration: 147.98921ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081822Z","caller":"traceutil/trace.go:171","msg":"trace[466374812] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"165.453332ms","start":"2024-08-02T18:14:12.916361Z","end":"2024-08-02T18:14:13.081814Z","steps":["trace[466374812] 'read index received'  (duration: 160.642107ms)","trace[466374812] 'applied index is now lower than readState.Index'  (duration: 4.810041ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:14:13.081986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.544692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T18:14:13.082836Z","caller":"traceutil/trace.go:171","msg":"trace[425686911] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:593; }","duration":"166.487424ms","start":"2024-08-02T18:14:12.916339Z","end":"2024-08-02T18:14:13.082826Z","steps":["trace[425686911] 'agreement among raft nodes before linearized reading'  (duration: 165.496941ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:17:26.950964Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-02T18:17:26.951082Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-250383","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	{"level":"warn","ts":"2024-08-02T18:17:26.951161Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.959204Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.996733Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.996819Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T18:17:26.996943Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce564ad586a3115","current-leader-member-id":"ce564ad586a3115"}
	{"level":"info","ts":"2024-08-02T18:17:26.999688Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:17:26.999956Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:17:27.000021Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-250383","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> kernel <==
	 18:20:58 up 9 min,  0 users,  load average: 0.15, 0.18, 0.11
	Linux multinode-250383 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc] <==
	I0802 18:20:17.456601       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:20:27.460749       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:20:27.460818       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:20:27.461044       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:20:27.461083       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:20:27.461196       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:20:27.461224       1 main.go:299] handling current node
	I0802 18:20:37.457151       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:20:37.457206       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:20:37.457345       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:20:37.457368       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.2.0/24] 
	I0802 18:20:37.457428       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:20:37.457448       1 main.go:299] handling current node
	I0802 18:20:47.457426       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:20:47.457578       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.2.0/24] 
	I0802 18:20:47.457752       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:20:47.457787       1 main.go:299] handling current node
	I0802 18:20:47.457814       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:20:47.457832       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:20:57.457612       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:20:57.457659       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.2.0/24] 
	I0802 18:20:57.457816       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:20:57.457860       1 main.go:299] handling current node
	I0802 18:20:57.457875       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:20:57.457880       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79] <==
	I0802 18:16:46.754163       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:16:56.756085       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:16:56.756138       1 main.go:299] handling current node
	I0802 18:16:56.756157       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:16:56.756162       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:16:56.756283       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:16:56.756302       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:06.754959       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:06.755088       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:06.755244       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:06.755271       1 main.go:299] handling current node
	I0802 18:17:06.755294       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:06.755310       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:17:16.758734       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:16.758852       1 main.go:299] handling current node
	I0802 18:17:16.758885       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:16.758908       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:17:16.759070       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:16.759105       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:26.761584       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:26.761627       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:26.761765       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:26.761783       1 main.go:299] handling current node
	I0802 18:17:26.761798       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:26.761812       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f] <==
	I0802 18:19:15.659662       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 18:19:15.659802       1 policy_source.go:224] refreshing policies
	I0802 18:19:15.660915       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 18:19:15.665817       1 aggregator.go:165] initial CRD sync complete...
	I0802 18:19:15.665857       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 18:19:15.665866       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 18:19:15.666586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 18:19:15.669042       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:19:15.756132       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 18:19:15.758206       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0802 18:19:15.759860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:19:15.760108       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:19:15.760570       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 18:19:15.760596       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 18:19:15.765562       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0802 18:19:15.766754       1 cache.go:39] Caches are synced for autoregister controller
	E0802 18:19:15.772996       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0802 18:19:16.567720       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:19:17.292244       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 18:19:17.402424       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 18:19:17.417854       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 18:19:17.484879       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:19:17.493175       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:19:28.117732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:19:28.217567       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd] <==
	E0802 18:14:43.523577       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0802 18:14:43.523709       1 timeout.go:142] post-timeout activity - time-elapsed: 2.44987ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-250383-m03" result: <nil>
	I0802 18:17:26.943442       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0802 18:17:26.954408       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0802 18:17:26.955019       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0802 18:17:26.955545       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0802 18:17:26.955602       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0802 18:17:26.955732       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0802 18:17:26.955755       1 controller.go:129] Ending legacy_token_tracking_controller
	I0802 18:17:26.955763       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0802 18:17:26.955789       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0802 18:17:26.955824       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0802 18:17:26.955842       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0802 18:17:26.955861       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0802 18:17:26.955886       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0802 18:17:26.955920       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0802 18:17:26.955946       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0802 18:17:26.955959       1 establishing_controller.go:87] Shutting down EstablishingController
	I0802 18:17:26.955985       1 naming_controller.go:302] Shutting down NamingConditionController
	I0802 18:17:26.955999       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0802 18:17:26.956018       1 controller.go:167] Shutting down OpenAPI controller
	I0802 18:17:26.956058       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0802 18:17:26.956082       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0802 18:17:26.956106       1 available_controller.go:439] Shutting down AvailableConditionController
	I0802 18:17:26.965026       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b] <==
	I0802 18:12:51.336650       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0802 18:13:15.459300       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m02\" does not exist"
	I0802 18:13:15.494795       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m02" podCIDRs=["10.244.1.0/24"]
	I0802 18:13:16.340106       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-250383-m02"
	I0802 18:13:35.666433       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:13:38.113793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.536671ms"
	I0802 18:13:38.144778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.932535ms"
	I0802 18:13:38.144881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.734µs"
	I0802 18:13:41.296100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.841752ms"
	I0802 18:13:41.296338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.087µs"
	I0802 18:13:41.842981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.085295ms"
	I0802 18:13:41.844164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.145µs"
	I0802 18:14:13.083925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:14:13.085674       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:14:13.096880       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.2.0/24"]
	I0802 18:14:16.360616       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-250383-m03"
	I0802 18:14:33.544634       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:01.368336       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:02.443544       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:02.443611       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:15:02.461051       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.3.0/24"]
	I0802 18:15:21.784700       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:16:01.412640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m03"
	I0802 18:16:01.459708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.833317ms"
	I0802 18:16:01.459786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.604µs"
	
	
	==> kube-controller-manager [aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6] <==
	I0802 18:19:28.550988       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:19:28.589741       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:19:28.589967       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0802 18:19:52.463457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.130091ms"
	I0802 18:19:52.471655       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.989484ms"
	I0802 18:19:52.471936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.58µs"
	I0802 18:19:53.863745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.659µs"
	I0802 18:19:57.051380       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m02\" does not exist"
	I0802 18:19:57.061965       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m02" podCIDRs=["10.244.1.0/24"]
	I0802 18:19:58.942833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.27µs"
	I0802 18:19:58.983544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.888µs"
	I0802 18:19:58.991076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.857µs"
	I0802 18:19:59.014724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.644µs"
	I0802 18:19:59.024194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.736µs"
	I0802 18:19:59.026299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.767µs"
	I0802 18:20:16.573835       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:16.592119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.686µs"
	I0802 18:20:16.606202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.202µs"
	I0802 18:20:20.180866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.513731ms"
	I0802 18:20:20.181077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.26µs"
	I0802 18:20:34.528379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:35.602946       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:35.603544       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:20:35.626991       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.2.0/24"]
	I0802 18:20:54.850311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	
	
	==> kube-proxy [26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f] <==
	I0802 18:19:16.617757       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:19:16.631760       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0802 18:19:16.696037       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:19:16.696109       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:19:16.696126       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:19:16.699172       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:19:16.699512       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:19:16.699592       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:19:16.700887       1 config.go:192] "Starting service config controller"
	I0802 18:19:16.700971       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:19:16.701058       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:19:16.701102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:19:16.701735       1 config.go:319] "Starting node config controller"
	I0802 18:19:16.701805       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:19:16.802028       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:19:16.802128       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:19:16.802156       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee] <==
	I0802 18:12:32.548287       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:12:32.563960       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0802 18:12:32.612985       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:12:32.613078       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:12:32.613110       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:12:32.617128       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:12:32.617622       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:12:32.617687       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:12:32.619367       1 config.go:192] "Starting service config controller"
	I0802 18:12:32.619612       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:12:32.619670       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:12:32.619676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:12:32.620798       1 config.go:319] "Starting node config controller"
	I0802 18:12:32.620829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:12:32.720013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0802 18:12:32.720127       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:12:32.721005       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7] <==
	I0802 18:19:14.466763       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:19:15.627301       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:19:15.627372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:19:15.627383       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:19:15.627389       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:19:15.667417       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:19:15.669270       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:19:15.673208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:19:15.673243       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:19:15.673811       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:19:15.673992       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:19:15.773606       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91] <==
	E0802 18:12:14.999744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:14.999875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 18:12:14.999941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 18:12:15.815085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 18:12:15.815140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 18:12:15.913760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 18:12:15.914131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 18:12:15.917412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 18:12:15.917587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 18:12:15.968731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 18:12:15.968866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:16.024100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 18:12:16.024144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 18:12:16.037621       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 18:12:16.037721       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:12:16.117678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 18:12:16.117801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 18:12:16.211388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 18:12:16.211553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 18:12:16.246025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 18:12:16.246523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:16.258345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 18:12:16.258584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0802 18:12:17.887340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0802 18:17:26.953612       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 02 18:19:12 multinode-250383 kubelet[3198]: E0802 18:19:12.740376    3198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.67:8443: connect: connection refused
	Aug 02 18:19:13 multinode-250383 kubelet[3198]: I0802 18:19:13.272508    3198 kubelet_node_status.go:73] "Attempting to register node" node="multinode-250383"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.720912    3198 kubelet_node_status.go:112] "Node was previously registered" node="multinode-250383"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.721339    3198 kubelet_node_status.go:76] "Successfully registered node" node="multinode-250383"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.722831    3198 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.723832    3198 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.739559    3198 apiserver.go:52] "Watching apiserver"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.749518    3198 topology_manager.go:215] "Topology Admit Handler" podUID="b262e69d-3b94-44ce-aae2-f309fece26ab" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p22xc"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.749638    3198 topology_manager.go:215] "Topology Admit Handler" podUID="43861b63-f926-47e1-a17d-4fe2f162b13b" podNamespace="kube-system" podName="kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.749701    3198 topology_manager.go:215] "Topology Admit Handler" podUID="e54c69c1-fdde-43c6-90d5-cd2171a4b1bc" podNamespace="kube-system" podName="kube-proxy-sjq5b"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.749738    3198 topology_manager.go:215] "Topology Admit Handler" podUID="816ce1dc-8f89-4c43-bdaf-6916dc76f56d" podNamespace="kube-system" podName="storage-provisioner"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.749786    3198 topology_manager.go:215] "Topology Admit Handler" podUID="e30d8939-3bac-44f5-9d29-1b79a4e40748" podNamespace="default" podName="busybox-fc5497c4f-6vqf8"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.757079    3198 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.832265    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e54c69c1-fdde-43c6-90d5-cd2171a4b1bc-lib-modules\") pod \"kube-proxy-sjq5b\" (UID: \"e54c69c1-fdde-43c6-90d5-cd2171a4b1bc\") " pod="kube-system/kube-proxy-sjq5b"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.833117    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/816ce1dc-8f89-4c43-bdaf-6916dc76f56d-tmp\") pod \"storage-provisioner\" (UID: \"816ce1dc-8f89-4c43-bdaf-6916dc76f56d\") " pod="kube-system/storage-provisioner"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.833716    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-xtables-lock\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.833825    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-cni-cfg\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.834136    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-lib-modules\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.834449    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e54c69c1-fdde-43c6-90d5-cd2171a4b1bc-xtables-lock\") pod \"kube-proxy-sjq5b\" (UID: \"e54c69c1-fdde-43c6-90d5-cd2171a4b1bc\") " pod="kube-system/kube-proxy-sjq5b"
	Aug 02 18:19:19 multinode-250383 kubelet[3198]: I0802 18:19:19.926752    3198 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 02 18:20:11 multinode-250383 kubelet[3198]: E0802 18:20:11.853610    3198 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:20:57.302036   43076 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-250383 -n multinode-250383
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-250383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (334.78s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 stop
E0802 18:22:43.928221   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-250383 stop: exit status 82 (2m0.460581037s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-250383-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-250383 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-250383 status: exit status 3 (18.782798532s)

                                                
                                                
-- stdout --
	multinode-250383
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-250383-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:23:20.479453   43742 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host
	E0802 18:23:20.479495   43742 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.114:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-250383 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-250383 -n multinode-250383
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-250383 logs -n 25: (1.396330198s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383:/home/docker/cp-test_multinode-250383-m02_multinode-250383.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383 sudo cat                                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m02_multinode-250383.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03:/home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383-m03 sudo cat                                   | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp testdata/cp-test.txt                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383:/home/docker/cp-test_multinode-250383-m03_multinode-250383.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383 sudo cat                                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02:/home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383-m02 sudo cat                                   | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-250383 node stop m03                                                          | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	| node    | multinode-250383 node start                                                             | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| stop    | -p multinode-250383                                                                     | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| start   | -p multinode-250383                                                                     | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:17 UTC | 02 Aug 24 18:20 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:20 UTC |                     |
	| node    | multinode-250383 node delete                                                            | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:20 UTC | 02 Aug 24 18:21 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-250383 stop                                                                   | multinode-250383 | jenkins | v1.33.1 | 02 Aug 24 18:21 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:17:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:17:26.088966   41488 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:17:26.089225   41488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:17:26.089234   41488 out.go:304] Setting ErrFile to fd 2...
	I0802 18:17:26.089238   41488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:17:26.089402   41488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:17:26.089905   41488 out.go:298] Setting JSON to false
	I0802 18:17:26.090846   41488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3590,"bootTime":1722619056,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:17:26.090905   41488 start.go:139] virtualization: kvm guest
	I0802 18:17:26.093329   41488 out.go:177] * [multinode-250383] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:17:26.094555   41488 notify.go:220] Checking for updates...
	I0802 18:17:26.094559   41488 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:17:26.095956   41488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:17:26.097217   41488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:17:26.098356   41488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:17:26.099530   41488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:17:26.100670   41488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:17:26.102131   41488 config.go:182] Loaded profile config "multinode-250383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:17:26.102211   41488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:17:26.102602   41488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:17:26.102655   41488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:17:26.118491   41488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0802 18:17:26.118891   41488 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:17:26.119451   41488 main.go:141] libmachine: Using API Version  1
	I0802 18:17:26.119474   41488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:17:26.119818   41488 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:17:26.120022   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.154923   41488 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:17:26.156120   41488 start.go:297] selected driver: kvm2
	I0802 18:17:26.156136   41488 start.go:901] validating driver "kvm2" against &{Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:17:26.156256   41488 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:17:26.156595   41488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:17:26.156660   41488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:17:26.170789   41488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:17:26.171454   41488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:17:26.171514   41488 cni.go:84] Creating CNI manager for ""
	I0802 18:17:26.171525   41488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0802 18:17:26.171592   41488 start.go:340] cluster config:
	{Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
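Note on the CNI decision logged above ("multinode detected (3 nodes found), recommending kindnet"): it follows directly from the Nodes slice in this cluster config. A minimal, illustrative Go sketch of that check, with hypothetical type and function names that are not minikube's actual code:

package main

import "fmt"

// Node mirrors the per-node fields visible in the cluster config dump above.
type Node struct {
	Name         string
	IP           string
	ControlPlane bool
	Worker       bool
}

// chooseCNI is a hypothetical helper: when no CNI is configured explicitly and
// more than one node exists, a pod network such as kindnet is recommended.
func chooseCNI(nodes []Node, configured string) string {
	if configured != "" {
		return configured
	}
	if len(nodes) > 1 {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	nodes := []Node{
		{Name: "multinode-250383", IP: "192.168.39.67", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.114", Worker: true},
		{Name: "m03", IP: "192.168.39.172", Worker: true},
	}
	fmt.Println(chooseCNI(nodes, "")) // prints "kindnet"
}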
	I0802 18:17:26.171716   41488 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:17:26.174175   41488 out.go:177] * Starting "multinode-250383" primary control-plane node in "multinode-250383" cluster
	I0802 18:17:26.175398   41488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:17:26.175429   41488 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:17:26.175436   41488 cache.go:56] Caching tarball of preloaded images
	I0802 18:17:26.175509   41488 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:17:26.175519   41488 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:17:26.175625   41488 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/config.json ...
	I0802 18:17:26.175802   41488 start.go:360] acquireMachinesLock for multinode-250383: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:17:26.175840   41488 start.go:364] duration metric: took 20.542µs to acquireMachinesLock for "multinode-250383"
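The acquireMachinesLock step above is a named lock taken with a retry delay and an overall timeout (Delay:500ms Timeout:13m0s in the log). A rough in-process Go sketch of that acquire-with-timeout pattern, assuming nothing about minikube's real lock implementation:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]bool{} // in-process stand-in for a machine-scoped lock
)

// acquire polls for a named lock, retrying every delay until timeout passes.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		mu.Lock()
		if !locks[name] {
			locks[name] = true
			mu.Unlock()
			return nil
		}
		mu.Unlock()
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("multinode-250383", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}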
	I0802 18:17:26.175853   41488 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:17:26.175859   41488 fix.go:54] fixHost starting: 
	I0802 18:17:26.176145   41488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:17:26.176175   41488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:17:26.190418   41488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41053
	I0802 18:17:26.190806   41488 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:17:26.191272   41488 main.go:141] libmachine: Using API Version  1
	I0802 18:17:26.191295   41488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:17:26.191631   41488 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:17:26.191865   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.192025   41488 main.go:141] libmachine: (multinode-250383) Calling .GetState
	I0802 18:17:26.193653   41488 fix.go:112] recreateIfNeeded on multinode-250383: state=Running err=<nil>
	W0802 18:17:26.193674   41488 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:17:26.195547   41488 out.go:177] * Updating the running kvm2 "multinode-250383" VM ...
	I0802 18:17:26.196690   41488 machine.go:94] provisionDockerMachine start ...
	I0802 18:17:26.196707   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:17:26.196905   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.199456   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.199867   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.199895   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.200099   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.200274   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.200443   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.200571   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.200737   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.200975   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.200990   41488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:17:26.304727   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-250383
	
	I0802 18:17:26.304785   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.305080   41488 buildroot.go:166] provisioning hostname "multinode-250383"
	I0802 18:17:26.305108   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.305320   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.308069   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.308417   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.308447   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.308595   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.308746   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.308885   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.309034   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.309220   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.309386   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.309401   41488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-250383 && echo "multinode-250383" | sudo tee /etc/hostname
	I0802 18:17:26.422357   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-250383
	
	I0802 18:17:26.422395   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.425066   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.425438   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.425469   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.425593   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.425781   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.425951   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.426094   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.426263   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.426476   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.426493   41488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-250383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-250383/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-250383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:17:26.527895   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
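The three commands just run (hostname, the sudo hostname/tee pair, and the /etc/hosts guard) all go through the same native SSH client. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log; this is not minikube's ssh_runner, and skipping host-key verification is only acceptable for a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the node and runs one command, returning its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.67:22", "docker",
		"/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa",
		"hostname")
	fmt.Println(out, err)
}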
	I0802 18:17:26.527923   41488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:17:26.527967   41488 buildroot.go:174] setting up certificates
	I0802 18:17:26.527983   41488 provision.go:84] configureAuth start
	I0802 18:17:26.528000   41488 main.go:141] libmachine: (multinode-250383) Calling .GetMachineName
	I0802 18:17:26.528325   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:17:26.530779   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.531163   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.531193   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.531305   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.533165   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.533517   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.533545   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.533651   41488 provision.go:143] copyHostCerts
	I0802 18:17:26.533680   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:17:26.533719   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:17:26.533729   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:17:26.533806   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:17:26.533917   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:17:26.533944   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:17:26.533954   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:17:26.533994   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:17:26.534066   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:17:26.534092   41488 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:17:26.534101   41488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:17:26.534135   41488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:17:26.534213   41488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.multinode-250383 san=[127.0.0.1 192.168.39.67 localhost minikube multinode-250383]
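The "generating server cert" line above lists the SANs baked into the machine's server certificate (127.0.0.1, 192.168.39.67, localhost, minikube, multinode-250383). A hedged Go sketch of issuing such a certificate with crypto/x509, using a throwaway self-signed CA in main; field values are copied from the log, everything else is an assumption and not minikube's own helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate carrying the SANs from the log line.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-250383"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-250383"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.67")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}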
	I0802 18:17:26.685070   41488 provision.go:177] copyRemoteCerts
	I0802 18:17:26.685132   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:17:26.685160   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.687446   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.687788   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.687813   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.687953   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.688138   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.688323   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.688473   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:17:26.769150   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0802 18:17:26.769216   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:17:26.793229   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0802 18:17:26.793298   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0802 18:17:26.815945   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0802 18:17:26.816011   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:17:26.839453   41488 provision.go:87] duration metric: took 311.454127ms to configureAuth
	I0802 18:17:26.839488   41488 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:17:26.839726   41488 config.go:182] Loaded profile config "multinode-250383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:17:26.839790   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:17:26.842349   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.842732   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:17:26.842759   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:17:26.842963   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:17:26.843180   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.843334   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:17:26.843465   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:17:26.843576   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:17:26.843794   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:17:26.843820   41488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:18:57.547830   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:18:57.547859   41488 machine.go:97] duration metric: took 1m31.351156538s to provisionDockerMachine
	I0802 18:18:57.547873   41488 start.go:293] postStartSetup for "multinode-250383" (driver="kvm2")
	I0802 18:18:57.547887   41488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:18:57.547910   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.548286   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:18:57.548333   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.551416   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.551832   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.551866   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.551978   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.552202   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.552390   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.552558   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.634466   41488 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:18:57.638732   41488 command_runner.go:130] > NAME=Buildroot
	I0802 18:18:57.638751   41488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0802 18:18:57.638757   41488 command_runner.go:130] > ID=buildroot
	I0802 18:18:57.638764   41488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0802 18:18:57.638771   41488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0802 18:18:57.638802   41488 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:18:57.638819   41488 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:18:57.638901   41488 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:18:57.638980   41488 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:18:57.638990   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /etc/ssl/certs/125472.pem
	I0802 18:18:57.639078   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:18:57.649063   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:18:57.671904   41488 start.go:296] duration metric: took 124.016972ms for postStartSetup
	I0802 18:18:57.671954   41488 fix.go:56] duration metric: took 1m31.49609375s for fixHost
	I0802 18:18:57.671982   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.674613   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.675029   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.675046   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.675228   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.675445   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.675641   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.675867   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.676076   41488 main.go:141] libmachine: Using SSH client type: native
	I0802 18:18:57.676233   41488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.67 22 <nil> <nil>}
	I0802 18:18:57.676243   41488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:18:57.775629   41488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722622737.740075434
	
	I0802 18:18:57.775663   41488 fix.go:216] guest clock: 1722622737.740075434
	I0802 18:18:57.775670   41488 fix.go:229] Guest: 2024-08-02 18:18:57.740075434 +0000 UTC Remote: 2024-08-02 18:18:57.671960943 +0000 UTC m=+91.617649118 (delta=68.114491ms)
	I0802 18:18:57.775693   41488 fix.go:200] guest clock delta is within tolerance: 68.114491ms
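The guest-clock check above compares the output of `date +%s.%N` on the VM against the host's clock and accepts the 68ms delta. A small Go sketch of that comparison; the 2-second tolerance is an assumed illustrative value, not necessarily the one minikube uses:

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest clock (seconds.nanos from `date +%s.%N`)
// is close enough to the host clock, and returns the measured delta.
func withinTolerance(guestUnix float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log lines above.
	delta, ok := withinTolerance(1722622737.740075434, time.Unix(0, 1722622737671960943), 2*time.Second)
	fmt.Printf("delta=%s within tolerance=%v\n", delta, ok)
}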
	I0802 18:18:57.775699   41488 start.go:83] releasing machines lock for "multinode-250383", held for 1m31.599850585s
	I0802 18:18:57.775716   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.776190   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:18:57.778648   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.779074   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.779134   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.779302   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.779853   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.780027   41488 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:18:57.780136   41488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:18:57.780176   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.780195   41488 ssh_runner.go:195] Run: cat /version.json
	I0802 18:18:57.780213   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:18:57.782716   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.782960   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783017   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.783041   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783217   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.783391   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.783444   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:18:57.783484   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:18:57.783512   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.783683   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:18:57.783706   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.783847   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:18:57.783981   41488 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:18:57.784120   41488 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:18:57.863814   41488 command_runner.go:130] > {"iso_version": "v1.33.1-1722420371-19355", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "7d72c3be84f92807e8ddb66796778c6727075dd6"}
	I0802 18:18:57.863998   41488 ssh_runner.go:195] Run: systemctl --version
	I0802 18:18:57.899634   41488 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0802 18:18:57.900337   41488 command_runner.go:130] > systemd 252 (252)
	I0802 18:18:57.900372   41488 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0802 18:18:57.900442   41488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:18:58.068546   41488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0802 18:18:58.077817   41488 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0802 18:18:58.078094   41488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:18:58.078168   41488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:18:58.087467   41488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
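The step above looks for bridge or podman CNI configs under /etc/cni/net.d and, when present, renames them with a .mk_disabled suffix (here there was nothing to disable). A sketch of that idea in Go, with my own function name and without the sudo/SSH indirection the real flow uses:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman configs so they cannot conflict
// with the CNI being managed for the cluster.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(moved, err)
}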
	I0802 18:18:58.087492   41488 start.go:495] detecting cgroup driver to use...
	I0802 18:18:58.087543   41488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:18:58.103626   41488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:18:58.116340   41488 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:18:58.116400   41488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:18:58.129519   41488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:18:58.142471   41488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:18:58.287053   41488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:18:58.428960   41488 docker.go:233] disabling docker service ...
	I0802 18:18:58.429060   41488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:18:58.446663   41488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:18:58.460091   41488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:18:58.600929   41488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:18:58.761239   41488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:18:58.810722   41488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:18:58.845464   41488 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0802 18:18:58.845504   41488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:18:58.845546   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.860397   41488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:18:58.860460   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.878457   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.888663   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.898861   41488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:18:58.913226   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.924536   41488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.942722   41488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:18:58.953656   41488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:18:58.966199   41488 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0802 18:18:58.966280   41488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:18:58.978009   41488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:18:59.148179   41488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:19:09.361123   41488 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.212901234s)
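Before that restart, the log applies a series of idempotent sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) and reloads systemd. A Go sketch that captures those commands as data and runs them through a local shell; in the real flow they run over SSH inside the VM, and the sed expressions are copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// crioConfigCmds lists the CRI-O configuration edits seen in the log,
// followed by the daemon-reload and restart.
var crioConfigCmds = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, c := range crioConfigCmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		fmt.Printf("%s\n%s err=%v\n", c, out, err)
	}
}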
	I0802 18:19:09.361162   41488 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:19:09.361220   41488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:19:09.365885   41488 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0802 18:19:09.365905   41488 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0802 18:19:09.365912   41488 command_runner.go:130] > Device: 0,22	Inode: 1405        Links: 1
	I0802 18:19:09.365919   41488 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0802 18:19:09.365924   41488 command_runner.go:130] > Access: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365930   41488 command_runner.go:130] > Modify: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365937   41488 command_runner.go:130] > Change: 2024-08-02 18:19:09.186228949 +0000
	I0802 18:19:09.365947   41488 command_runner.go:130] >  Birth: -
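The "Will wait 60s for socket path /var/run/crio/crio.sock" step amounts to polling stat on the socket until it appears or a deadline passes. A self-contained Go sketch of that wait loop (interval is an assumed value):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond))
}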
	I0802 18:19:09.366078   41488 start.go:563] Will wait 60s for crictl version
	I0802 18:19:09.366121   41488 ssh_runner.go:195] Run: which crictl
	I0802 18:19:09.369606   41488 command_runner.go:130] > /usr/bin/crictl
	I0802 18:19:09.369658   41488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:19:09.403989   41488 command_runner.go:130] > Version:  0.1.0
	I0802 18:19:09.404011   41488 command_runner.go:130] > RuntimeName:  cri-o
	I0802 18:19:09.404016   41488 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0802 18:19:09.404020   41488 command_runner.go:130] > RuntimeApiVersion:  v1
	I0802 18:19:09.405022   41488 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:19:09.405114   41488 ssh_runner.go:195] Run: crio --version
	I0802 18:19:09.430997   41488 command_runner.go:130] > crio version 1.29.1
	I0802 18:19:09.431019   41488 command_runner.go:130] > Version:        1.29.1
	I0802 18:19:09.431025   41488 command_runner.go:130] > GitCommit:      unknown
	I0802 18:19:09.431029   41488 command_runner.go:130] > GitCommitDate:  unknown
	I0802 18:19:09.431033   41488 command_runner.go:130] > GitTreeState:   clean
	I0802 18:19:09.431038   41488 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0802 18:19:09.431042   41488 command_runner.go:130] > GoVersion:      go1.21.6
	I0802 18:19:09.431048   41488 command_runner.go:130] > Compiler:       gc
	I0802 18:19:09.431054   41488 command_runner.go:130] > Platform:       linux/amd64
	I0802 18:19:09.431060   41488 command_runner.go:130] > Linkmode:       dynamic
	I0802 18:19:09.431066   41488 command_runner.go:130] > BuildTags:      
	I0802 18:19:09.431073   41488 command_runner.go:130] >   containers_image_ostree_stub
	I0802 18:19:09.431083   41488 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0802 18:19:09.431088   41488 command_runner.go:130] >   btrfs_noversion
	I0802 18:19:09.431093   41488 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0802 18:19:09.431107   41488 command_runner.go:130] >   libdm_no_deferred_remove
	I0802 18:19:09.431117   41488 command_runner.go:130] >   seccomp
	I0802 18:19:09.431149   41488 command_runner.go:130] > LDFlags:          unknown
	I0802 18:19:09.431157   41488 command_runner.go:130] > SeccompEnabled:   true
	I0802 18:19:09.431161   41488 command_runner.go:130] > AppArmorEnabled:  false
	I0802 18:19:09.432348   41488 ssh_runner.go:195] Run: crio --version
	I0802 18:19:09.459853   41488 command_runner.go:130] > crio version 1.29.1
	I0802 18:19:09.459881   41488 command_runner.go:130] > Version:        1.29.1
	I0802 18:19:09.459889   41488 command_runner.go:130] > GitCommit:      unknown
	I0802 18:19:09.459895   41488 command_runner.go:130] > GitCommitDate:  unknown
	I0802 18:19:09.459900   41488 command_runner.go:130] > GitTreeState:   clean
	I0802 18:19:09.459912   41488 command_runner.go:130] > BuildDate:      2024-07-31T15:55:08Z
	I0802 18:19:09.459919   41488 command_runner.go:130] > GoVersion:      go1.21.6
	I0802 18:19:09.459925   41488 command_runner.go:130] > Compiler:       gc
	I0802 18:19:09.459929   41488 command_runner.go:130] > Platform:       linux/amd64
	I0802 18:19:09.459934   41488 command_runner.go:130] > Linkmode:       dynamic
	I0802 18:19:09.459944   41488 command_runner.go:130] > BuildTags:      
	I0802 18:19:09.459951   41488 command_runner.go:130] >   containers_image_ostree_stub
	I0802 18:19:09.459955   41488 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0802 18:19:09.459960   41488 command_runner.go:130] >   btrfs_noversion
	I0802 18:19:09.459964   41488 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0802 18:19:09.459972   41488 command_runner.go:130] >   libdm_no_deferred_remove
	I0802 18:19:09.459975   41488 command_runner.go:130] >   seccomp
	I0802 18:19:09.459980   41488 command_runner.go:130] > LDFlags:          unknown
	I0802 18:19:09.459983   41488 command_runner.go:130] > SeccompEnabled:   true
	I0802 18:19:09.459987   41488 command_runner.go:130] > AppArmorEnabled:  false
	I0802 18:19:09.464064   41488 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:19:09.465711   41488 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:19:09.468327   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:19:09.468796   41488 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:19:09.468822   41488 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:19:09.469006   41488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:19:09.472907   41488 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0802 18:19:09.473117   41488 kubeadm.go:883] updating cluster {Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
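The preload check that follows shells out to `sudo crictl images --output json` and walks the returned image list. A minimal Go sketch of decoding that JSON shape; the struct names are my own and cover only a subset of the fields visible in the listing below:

package main

import (
	"encoding/json"
	"fmt"
)

// image and imageList mirror part of the JSON printed by `crictl images --output json`.
type image struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []image `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"id":"5cc3abe5717d","repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"repoDigests":[],"size":"87165492","pinned":false}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}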
	I0802 18:19:09.473310   41488 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:19:09.473371   41488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:19:09.518537   41488 command_runner.go:130] > {
	I0802 18:19:09.518563   41488 command_runner.go:130] >   "images": [
	I0802 18:19:09.518570   41488 command_runner.go:130] >     {
	I0802 18:19:09.518581   41488 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0802 18:19:09.518586   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518591   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0802 18:19:09.518595   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518601   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518627   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0802 18:19:09.518640   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0802 18:19:09.518646   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518654   41488 command_runner.go:130] >       "size": "87165492",
	I0802 18:19:09.518660   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518667   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518676   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518686   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518692   41488 command_runner.go:130] >     },
	I0802 18:19:09.518700   41488 command_runner.go:130] >     {
	I0802 18:19:09.518710   41488 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0802 18:19:09.518717   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518726   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0802 18:19:09.518735   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518739   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518748   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0802 18:19:09.518757   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0802 18:19:09.518761   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518767   41488 command_runner.go:130] >       "size": "87174707",
	I0802 18:19:09.518771   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518789   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518795   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518799   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518803   41488 command_runner.go:130] >     },
	I0802 18:19:09.518809   41488 command_runner.go:130] >     {
	I0802 18:19:09.518819   41488 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0802 18:19:09.518830   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518837   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0802 18:19:09.518842   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518848   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518859   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0802 18:19:09.518870   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0802 18:19:09.518875   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518882   41488 command_runner.go:130] >       "size": "1363676",
	I0802 18:19:09.518887   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.518894   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.518900   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.518906   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.518911   41488 command_runner.go:130] >     },
	I0802 18:19:09.518918   41488 command_runner.go:130] >     {
	I0802 18:19:09.518931   41488 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0802 18:19:09.518940   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.518951   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0802 18:19:09.518957   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518961   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.518970   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0802 18:19:09.518987   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0802 18:19:09.518994   41488 command_runner.go:130] >       ],
	I0802 18:19:09.518998   41488 command_runner.go:130] >       "size": "31470524",
	I0802 18:19:09.519002   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519006   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519012   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519016   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519022   41488 command_runner.go:130] >     },
	I0802 18:19:09.519025   41488 command_runner.go:130] >     {
	I0802 18:19:09.519033   41488 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0802 18:19:09.519041   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519048   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0802 18:19:09.519054   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519058   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519067   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0802 18:19:09.519076   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0802 18:19:09.519081   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519085   41488 command_runner.go:130] >       "size": "61245718",
	I0802 18:19:09.519094   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519113   41488 command_runner.go:130] >       "username": "nonroot",
	I0802 18:19:09.519123   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519129   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519134   41488 command_runner.go:130] >     },
	I0802 18:19:09.519139   41488 command_runner.go:130] >     {
	I0802 18:19:09.519145   41488 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0802 18:19:09.519151   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519156   41488 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0802 18:19:09.519162   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519165   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519174   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0802 18:19:09.519181   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0802 18:19:09.519186   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519191   41488 command_runner.go:130] >       "size": "150779692",
	I0802 18:19:09.519196   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519200   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519205   41488 command_runner.go:130] >       },
	I0802 18:19:09.519209   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519214   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519218   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519223   41488 command_runner.go:130] >     },
	I0802 18:19:09.519226   41488 command_runner.go:130] >     {
	I0802 18:19:09.519234   41488 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0802 18:19:09.519238   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519245   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0802 18:19:09.519251   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519254   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519270   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0802 18:19:09.519280   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0802 18:19:09.519286   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519290   41488 command_runner.go:130] >       "size": "117609954",
	I0802 18:19:09.519295   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519299   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519305   41488 command_runner.go:130] >       },
	I0802 18:19:09.519308   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519315   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519319   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519324   41488 command_runner.go:130] >     },
	I0802 18:19:09.519328   41488 command_runner.go:130] >     {
	I0802 18:19:09.519335   41488 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0802 18:19:09.519339   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519347   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0802 18:19:09.519350   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519356   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519377   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0802 18:19:09.519387   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0802 18:19:09.519393   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519397   41488 command_runner.go:130] >       "size": "112198984",
	I0802 18:19:09.519402   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519406   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519412   41488 command_runner.go:130] >       },
	I0802 18:19:09.519416   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519420   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519423   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519426   41488 command_runner.go:130] >     },
	I0802 18:19:09.519429   41488 command_runner.go:130] >     {
	I0802 18:19:09.519434   41488 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0802 18:19:09.519438   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519442   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0802 18:19:09.519446   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519449   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519456   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0802 18:19:09.519462   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0802 18:19:09.519470   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519479   41488 command_runner.go:130] >       "size": "85953945",
	I0802 18:19:09.519482   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.519485   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519489   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519492   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519495   41488 command_runner.go:130] >     },
	I0802 18:19:09.519498   41488 command_runner.go:130] >     {
	I0802 18:19:09.519504   41488 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0802 18:19:09.519507   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519512   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0802 18:19:09.519518   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519522   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519531   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0802 18:19:09.519540   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0802 18:19:09.519545   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519549   41488 command_runner.go:130] >       "size": "63051080",
	I0802 18:19:09.519560   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519566   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.519569   41488 command_runner.go:130] >       },
	I0802 18:19:09.519576   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519580   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519584   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.519587   41488 command_runner.go:130] >     },
	I0802 18:19:09.519590   41488 command_runner.go:130] >     {
	I0802 18:19:09.519596   41488 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0802 18:19:09.519602   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.519606   41488 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0802 18:19:09.519616   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519622   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.519628   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0802 18:19:09.519637   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0802 18:19:09.519642   41488 command_runner.go:130] >       ],
	I0802 18:19:09.519646   41488 command_runner.go:130] >       "size": "750414",
	I0802 18:19:09.519651   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.519655   41488 command_runner.go:130] >         "value": "65535"
	I0802 18:19:09.519665   41488 command_runner.go:130] >       },
	I0802 18:19:09.519670   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.519676   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.519680   41488 command_runner.go:130] >       "pinned": true
	I0802 18:19:09.519685   41488 command_runner.go:130] >     }
	I0802 18:19:09.519688   41488 command_runner.go:130] >   ]
	I0802 18:19:09.519693   41488 command_runner.go:130] > }
	I0802 18:19:09.519886   41488 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:19:09.519901   41488 crio.go:433] Images already preloaded, skipping extraction
	I0802 18:19:09.519945   41488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:19:09.551805   41488 command_runner.go:130] > {
	I0802 18:19:09.551825   41488 command_runner.go:130] >   "images": [
	I0802 18:19:09.551829   41488 command_runner.go:130] >     {
	I0802 18:19:09.551838   41488 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0802 18:19:09.551842   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.551848   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0802 18:19:09.551853   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551857   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.551864   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0802 18:19:09.551871   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0802 18:19:09.551875   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551879   41488 command_runner.go:130] >       "size": "87165492",
	I0802 18:19:09.551883   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.551886   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.551893   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.551901   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.551906   41488 command_runner.go:130] >     },
	I0802 18:19:09.551913   41488 command_runner.go:130] >     {
	I0802 18:19:09.551921   41488 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0802 18:19:09.551927   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.551939   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0802 18:19:09.551944   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551951   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.551962   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0802 18:19:09.551973   41488 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0802 18:19:09.551985   41488 command_runner.go:130] >       ],
	I0802 18:19:09.551991   41488 command_runner.go:130] >       "size": "87174707",
	I0802 18:19:09.551994   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552001   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552005   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552009   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552013   41488 command_runner.go:130] >     },
	I0802 18:19:09.552016   41488 command_runner.go:130] >     {
	I0802 18:19:09.552022   41488 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0802 18:19:09.552027   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552031   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0802 18:19:09.552037   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552040   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552047   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0802 18:19:09.552057   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0802 18:19:09.552061   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552065   41488 command_runner.go:130] >       "size": "1363676",
	I0802 18:19:09.552069   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552072   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552076   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552080   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552085   41488 command_runner.go:130] >     },
	I0802 18:19:09.552088   41488 command_runner.go:130] >     {
	I0802 18:19:09.552094   41488 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0802 18:19:09.552100   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552106   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0802 18:19:09.552109   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552112   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552120   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0802 18:19:09.552137   41488 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0802 18:19:09.552143   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552147   41488 command_runner.go:130] >       "size": "31470524",
	I0802 18:19:09.552151   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552155   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552158   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552162   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552177   41488 command_runner.go:130] >     },
	I0802 18:19:09.552181   41488 command_runner.go:130] >     {
	I0802 18:19:09.552186   41488 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0802 18:19:09.552190   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552194   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0802 18:19:09.552198   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552202   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552209   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0802 18:19:09.552218   41488 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0802 18:19:09.552221   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552225   41488 command_runner.go:130] >       "size": "61245718",
	I0802 18:19:09.552229   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552232   41488 command_runner.go:130] >       "username": "nonroot",
	I0802 18:19:09.552236   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552244   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552249   41488 command_runner.go:130] >     },
	I0802 18:19:09.552252   41488 command_runner.go:130] >     {
	I0802 18:19:09.552258   41488 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0802 18:19:09.552264   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552268   41488 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0802 18:19:09.552273   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552277   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552285   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0802 18:19:09.552292   41488 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0802 18:19:09.552298   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552301   41488 command_runner.go:130] >       "size": "150779692",
	I0802 18:19:09.552305   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552311   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552314   41488 command_runner.go:130] >       },
	I0802 18:19:09.552319   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552322   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552327   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552330   41488 command_runner.go:130] >     },
	I0802 18:19:09.552334   41488 command_runner.go:130] >     {
	I0802 18:19:09.552341   41488 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0802 18:19:09.552345   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552359   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0802 18:19:09.552366   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552370   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552379   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0802 18:19:09.552389   41488 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0802 18:19:09.552393   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552397   41488 command_runner.go:130] >       "size": "117609954",
	I0802 18:19:09.552403   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552407   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552411   41488 command_runner.go:130] >       },
	I0802 18:19:09.552415   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552419   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552423   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552426   41488 command_runner.go:130] >     },
	I0802 18:19:09.552429   41488 command_runner.go:130] >     {
	I0802 18:19:09.552436   41488 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0802 18:19:09.552442   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552447   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0802 18:19:09.552452   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552456   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552476   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0802 18:19:09.552486   41488 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0802 18:19:09.552490   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552494   41488 command_runner.go:130] >       "size": "112198984",
	I0802 18:19:09.552497   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552501   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552504   41488 command_runner.go:130] >       },
	I0802 18:19:09.552508   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552512   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552521   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552527   41488 command_runner.go:130] >     },
	I0802 18:19:09.552530   41488 command_runner.go:130] >     {
	I0802 18:19:09.552536   41488 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0802 18:19:09.552542   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552547   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0802 18:19:09.552552   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552560   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552569   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0802 18:19:09.552579   41488 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0802 18:19:09.552583   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552587   41488 command_runner.go:130] >       "size": "85953945",
	I0802 18:19:09.552593   41488 command_runner.go:130] >       "uid": null,
	I0802 18:19:09.552597   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552610   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552616   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552620   41488 command_runner.go:130] >     },
	I0802 18:19:09.552623   41488 command_runner.go:130] >     {
	I0802 18:19:09.552629   41488 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0802 18:19:09.552633   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552638   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0802 18:19:09.552643   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552647   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552654   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0802 18:19:09.552663   41488 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0802 18:19:09.552669   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552673   41488 command_runner.go:130] >       "size": "63051080",
	I0802 18:19:09.552677   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552681   41488 command_runner.go:130] >         "value": "0"
	I0802 18:19:09.552686   41488 command_runner.go:130] >       },
	I0802 18:19:09.552690   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552694   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552700   41488 command_runner.go:130] >       "pinned": false
	I0802 18:19:09.552703   41488 command_runner.go:130] >     },
	I0802 18:19:09.552707   41488 command_runner.go:130] >     {
	I0802 18:19:09.552712   41488 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0802 18:19:09.552718   41488 command_runner.go:130] >       "repoTags": [
	I0802 18:19:09.552722   41488 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0802 18:19:09.552725   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552730   41488 command_runner.go:130] >       "repoDigests": [
	I0802 18:19:09.552738   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0802 18:19:09.552745   41488 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0802 18:19:09.552749   41488 command_runner.go:130] >       ],
	I0802 18:19:09.552760   41488 command_runner.go:130] >       "size": "750414",
	I0802 18:19:09.552766   41488 command_runner.go:130] >       "uid": {
	I0802 18:19:09.552770   41488 command_runner.go:130] >         "value": "65535"
	I0802 18:19:09.552774   41488 command_runner.go:130] >       },
	I0802 18:19:09.552777   41488 command_runner.go:130] >       "username": "",
	I0802 18:19:09.552781   41488 command_runner.go:130] >       "spec": null,
	I0802 18:19:09.552785   41488 command_runner.go:130] >       "pinned": true
	I0802 18:19:09.552788   41488 command_runner.go:130] >     }
	I0802 18:19:09.552791   41488 command_runner.go:130] >   ]
	I0802 18:19:09.552795   41488 command_runner.go:130] > }
	I0802 18:19:09.552912   41488 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:19:09.552923   41488 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:19:09.552930   41488 kubeadm.go:934] updating node { 192.168.39.67 8443 v1.30.3 crio true true} ...
	I0802 18:19:09.553048   41488 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-250383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:19:09.553111   41488 ssh_runner.go:195] Run: crio config
	I0802 18:19:09.593607   41488 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0802 18:19:09.593641   41488 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0802 18:19:09.593651   41488 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0802 18:19:09.593676   41488 command_runner.go:130] > #
	I0802 18:19:09.593689   41488 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0802 18:19:09.593700   41488 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0802 18:19:09.593709   41488 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0802 18:19:09.593738   41488 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0802 18:19:09.593749   41488 command_runner.go:130] > # reload'.
	I0802 18:19:09.593759   41488 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0802 18:19:09.593769   41488 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0802 18:19:09.593780   41488 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0802 18:19:09.593792   41488 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0802 18:19:09.593808   41488 command_runner.go:130] > [crio]
	I0802 18:19:09.593821   41488 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0802 18:19:09.593830   41488 command_runner.go:130] > # containers images, in this directory.
	I0802 18:19:09.593842   41488 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0802 18:19:09.593856   41488 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0802 18:19:09.593867   41488 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0802 18:19:09.593881   41488 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0802 18:19:09.594022   41488 command_runner.go:130] > # imagestore = ""
	I0802 18:19:09.594047   41488 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0802 18:19:09.594059   41488 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0802 18:19:09.594127   41488 command_runner.go:130] > storage_driver = "overlay"
	I0802 18:19:09.594144   41488 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0802 18:19:09.594154   41488 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0802 18:19:09.594163   41488 command_runner.go:130] > storage_option = [
	I0802 18:19:09.594247   41488 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0802 18:19:09.594355   41488 command_runner.go:130] > ]
	I0802 18:19:09.594371   41488 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0802 18:19:09.594383   41488 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0802 18:19:09.594453   41488 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0802 18:19:09.594468   41488 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0802 18:19:09.594483   41488 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0802 18:19:09.594491   41488 command_runner.go:130] > # always happen on a node reboot
	I0802 18:19:09.594678   41488 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0802 18:19:09.594719   41488 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0802 18:19:09.594730   41488 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0802 18:19:09.594738   41488 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0802 18:19:09.594814   41488 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0802 18:19:09.594835   41488 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0802 18:19:09.594848   41488 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0802 18:19:09.595125   41488 command_runner.go:130] > # internal_wipe = true
	I0802 18:19:09.595146   41488 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0802 18:19:09.595155   41488 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0802 18:19:09.595410   41488 command_runner.go:130] > # internal_repair = false
	I0802 18:19:09.595429   41488 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0802 18:19:09.595438   41488 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0802 18:19:09.595448   41488 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0802 18:19:09.595588   41488 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0802 18:19:09.595612   41488 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0802 18:19:09.595619   41488 command_runner.go:130] > [crio.api]
	I0802 18:19:09.595627   41488 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0802 18:19:09.595807   41488 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0802 18:19:09.595821   41488 command_runner.go:130] > # IP address on which the stream server will listen.
	I0802 18:19:09.596178   41488 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0802 18:19:09.596193   41488 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0802 18:19:09.596199   41488 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0802 18:19:09.596383   41488 command_runner.go:130] > # stream_port = "0"
	I0802 18:19:09.596393   41488 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0802 18:19:09.596632   41488 command_runner.go:130] > # stream_enable_tls = false
	I0802 18:19:09.596649   41488 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0802 18:19:09.596825   41488 command_runner.go:130] > # stream_idle_timeout = ""
	I0802 18:19:09.596841   41488 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0802 18:19:09.596850   41488 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0802 18:19:09.596871   41488 command_runner.go:130] > # minutes.
	I0802 18:19:09.597048   41488 command_runner.go:130] > # stream_tls_cert = ""
	I0802 18:19:09.597067   41488 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0802 18:19:09.597076   41488 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0802 18:19:09.597216   41488 command_runner.go:130] > # stream_tls_key = ""
	I0802 18:19:09.597230   41488 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0802 18:19:09.597240   41488 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0802 18:19:09.597273   41488 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0802 18:19:09.597423   41488 command_runner.go:130] > # stream_tls_ca = ""
	I0802 18:19:09.597440   41488 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0802 18:19:09.597502   41488 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0802 18:19:09.597526   41488 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0802 18:19:09.597600   41488 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0802 18:19:09.597614   41488 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0802 18:19:09.597626   41488 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0802 18:19:09.597632   41488 command_runner.go:130] > [crio.runtime]
	I0802 18:19:09.597644   41488 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0802 18:19:09.597657   41488 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0802 18:19:09.597663   41488 command_runner.go:130] > # "nofile=1024:2048"
	I0802 18:19:09.597675   41488 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0802 18:19:09.597728   41488 command_runner.go:130] > # default_ulimits = [
	I0802 18:19:09.597849   41488 command_runner.go:130] > # ]
	I0802 18:19:09.597863   41488 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0802 18:19:09.598066   41488 command_runner.go:130] > # no_pivot = false
	I0802 18:19:09.598084   41488 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0802 18:19:09.598095   41488 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0802 18:19:09.598258   41488 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0802 18:19:09.598278   41488 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0802 18:19:09.598291   41488 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0802 18:19:09.598305   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0802 18:19:09.598393   41488 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0802 18:19:09.598406   41488 command_runner.go:130] > # Cgroup setting for conmon
	I0802 18:19:09.598417   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0802 18:19:09.598515   41488 command_runner.go:130] > conmon_cgroup = "pod"
	I0802 18:19:09.598533   41488 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0802 18:19:09.598542   41488 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0802 18:19:09.598554   41488 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0802 18:19:09.598563   41488 command_runner.go:130] > conmon_env = [
	I0802 18:19:09.598613   41488 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0802 18:19:09.598655   41488 command_runner.go:130] > ]
	I0802 18:19:09.598668   41488 command_runner.go:130] > # Additional environment variables to set for all the
	I0802 18:19:09.598676   41488 command_runner.go:130] > # containers. These are overridden if set in the
	I0802 18:19:09.598688   41488 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0802 18:19:09.598782   41488 command_runner.go:130] > # default_env = [
	I0802 18:19:09.598966   41488 command_runner.go:130] > # ]
	I0802 18:19:09.598976   41488 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0802 18:19:09.598993   41488 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0802 18:19:09.599212   41488 command_runner.go:130] > # selinux = false
	I0802 18:19:09.599228   41488 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0802 18:19:09.599238   41488 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0802 18:19:09.599248   41488 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0802 18:19:09.599399   41488 command_runner.go:130] > # seccomp_profile = ""
	I0802 18:19:09.599414   41488 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0802 18:19:09.599422   41488 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0802 18:19:09.599430   41488 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0802 18:19:09.599437   41488 command_runner.go:130] > # which might increase security.
	I0802 18:19:09.599444   41488 command_runner.go:130] > # This option is currently deprecated,
	I0802 18:19:09.599453   41488 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0802 18:19:09.599522   41488 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0802 18:19:09.599536   41488 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0802 18:19:09.599547   41488 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0802 18:19:09.599561   41488 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0802 18:19:09.599574   41488 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0802 18:19:09.599582   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.599771   41488 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0802 18:19:09.599782   41488 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0802 18:19:09.599789   41488 command_runner.go:130] > # the cgroup blockio controller.
	I0802 18:19:09.600019   41488 command_runner.go:130] > # blockio_config_file = ""
	I0802 18:19:09.600033   41488 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0802 18:19:09.600038   41488 command_runner.go:130] > # blockio parameters.
	I0802 18:19:09.600246   41488 command_runner.go:130] > # blockio_reload = false
	I0802 18:19:09.600257   41488 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0802 18:19:09.600272   41488 command_runner.go:130] > # irqbalance daemon.
	I0802 18:19:09.600490   41488 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0802 18:19:09.600499   41488 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0802 18:19:09.600506   41488 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0802 18:19:09.600515   41488 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0802 18:19:09.600746   41488 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0802 18:19:09.600753   41488 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0802 18:19:09.600758   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.600963   41488 command_runner.go:130] > # rdt_config_file = ""
	I0802 18:19:09.600976   41488 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0802 18:19:09.601099   41488 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0802 18:19:09.601150   41488 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0802 18:19:09.601278   41488 command_runner.go:130] > # separate_pull_cgroup = ""
	I0802 18:19:09.601288   41488 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0802 18:19:09.601294   41488 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0802 18:19:09.601297   41488 command_runner.go:130] > # will be added.
	I0802 18:19:09.601395   41488 command_runner.go:130] > # default_capabilities = [
	I0802 18:19:09.601525   41488 command_runner.go:130] > # 	"CHOWN",
	I0802 18:19:09.601662   41488 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0802 18:19:09.601795   41488 command_runner.go:130] > # 	"FSETID",
	I0802 18:19:09.601972   41488 command_runner.go:130] > # 	"FOWNER",
	I0802 18:19:09.602069   41488 command_runner.go:130] > # 	"SETGID",
	I0802 18:19:09.602180   41488 command_runner.go:130] > # 	"SETUID",
	I0802 18:19:09.602321   41488 command_runner.go:130] > # 	"SETPCAP",
	I0802 18:19:09.602471   41488 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0802 18:19:09.602603   41488 command_runner.go:130] > # 	"KILL",
	I0802 18:19:09.602720   41488 command_runner.go:130] > # ]
	I0802 18:19:09.602737   41488 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0802 18:19:09.602748   41488 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0802 18:19:09.602986   41488 command_runner.go:130] > # add_inheritable_capabilities = false
	I0802 18:19:09.603000   41488 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0802 18:19:09.603010   41488 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0802 18:19:09.603018   41488 command_runner.go:130] > default_sysctls = [
	I0802 18:19:09.603060   41488 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0802 18:19:09.603146   41488 command_runner.go:130] > ]
	I0802 18:19:09.603254   41488 command_runner.go:130] > # List of devices on the host that a
	I0802 18:19:09.603427   41488 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0802 18:19:09.605162   41488 command_runner.go:130] > # allowed_devices = [
	I0802 18:19:09.605179   41488 command_runner.go:130] > # 	"/dev/fuse",
	I0802 18:19:09.605185   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605194   41488 command_runner.go:130] > # List of additional devices. specified as
	I0802 18:19:09.605206   41488 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0802 18:19:09.605214   41488 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0802 18:19:09.605228   41488 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0802 18:19:09.605238   41488 command_runner.go:130] > # additional_devices = [
	I0802 18:19:09.605243   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605252   41488 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0802 18:19:09.605261   41488 command_runner.go:130] > # cdi_spec_dirs = [
	I0802 18:19:09.605270   41488 command_runner.go:130] > # 	"/etc/cdi",
	I0802 18:19:09.605277   41488 command_runner.go:130] > # 	"/var/run/cdi",
	I0802 18:19:09.605281   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605288   41488 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0802 18:19:09.605300   41488 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0802 18:19:09.605311   41488 command_runner.go:130] > # Defaults to false.
	I0802 18:19:09.605319   41488 command_runner.go:130] > # device_ownership_from_security_context = false
	I0802 18:19:09.605332   41488 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0802 18:19:09.605345   41488 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0802 18:19:09.605354   41488 command_runner.go:130] > # hooks_dir = [
	I0802 18:19:09.605363   41488 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0802 18:19:09.605369   41488 command_runner.go:130] > # ]
	I0802 18:19:09.605376   41488 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0802 18:19:09.605389   41488 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0802 18:19:09.605401   41488 command_runner.go:130] > # its default mounts from the following two files:
	I0802 18:19:09.605409   41488 command_runner.go:130] > #
	I0802 18:19:09.605422   41488 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0802 18:19:09.605437   41488 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0802 18:19:09.605448   41488 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0802 18:19:09.605454   41488 command_runner.go:130] > #
	I0802 18:19:09.605461   41488 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0802 18:19:09.605474   41488 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0802 18:19:09.605487   41488 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0802 18:19:09.605500   41488 command_runner.go:130] > #      only add mounts it finds in this file.
	I0802 18:19:09.605509   41488 command_runner.go:130] > #
	I0802 18:19:09.605520   41488 command_runner.go:130] > # default_mounts_file = ""
	I0802 18:19:09.605529   41488 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0802 18:19:09.605541   41488 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0802 18:19:09.605546   41488 command_runner.go:130] > pids_limit = 1024
	I0802 18:19:09.605556   41488 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0802 18:19:09.605567   41488 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0802 18:19:09.605580   41488 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0802 18:19:09.605592   41488 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0802 18:19:09.605602   41488 command_runner.go:130] > # log_size_max = -1
	I0802 18:19:09.605612   41488 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0802 18:19:09.605620   41488 command_runner.go:130] > # log_to_journald = false
	I0802 18:19:09.605630   41488 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0802 18:19:09.605641   41488 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0802 18:19:09.605650   41488 command_runner.go:130] > # Path to directory for container attach sockets.
	I0802 18:19:09.605661   41488 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0802 18:19:09.605672   41488 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0802 18:19:09.605681   41488 command_runner.go:130] > # bind_mount_prefix = ""
	I0802 18:19:09.605691   41488 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0802 18:19:09.605700   41488 command_runner.go:130] > # read_only = false
	I0802 18:19:09.605709   41488 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0802 18:19:09.605717   41488 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0802 18:19:09.605726   41488 command_runner.go:130] > # live configuration reload.
	I0802 18:19:09.605733   41488 command_runner.go:130] > # log_level = "info"
	I0802 18:19:09.605745   41488 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0802 18:19:09.605756   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.605766   41488 command_runner.go:130] > # log_filter = ""
	I0802 18:19:09.605776   41488 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0802 18:19:09.605787   41488 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0802 18:19:09.605794   41488 command_runner.go:130] > # separated by comma.
	I0802 18:19:09.605804   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605821   41488 command_runner.go:130] > # uid_mappings = ""
	I0802 18:19:09.605840   41488 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0802 18:19:09.605862   41488 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0802 18:19:09.605875   41488 command_runner.go:130] > # separated by comma.
	I0802 18:19:09.605895   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605913   41488 command_runner.go:130] > # gid_mappings = ""
	I0802 18:19:09.605928   41488 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0802 18:19:09.605939   41488 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0802 18:19:09.605954   41488 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0802 18:19:09.605962   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.605967   41488 command_runner.go:130] > # minimum_mappable_uid = -1
	I0802 18:19:09.605976   41488 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0802 18:19:09.605990   41488 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0802 18:19:09.605999   41488 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0802 18:19:09.606014   41488 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0802 18:19:09.606023   41488 command_runner.go:130] > # minimum_mappable_gid = -1
	I0802 18:19:09.606033   41488 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0802 18:19:09.606045   41488 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0802 18:19:09.606052   41488 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0802 18:19:09.606056   41488 command_runner.go:130] > # ctr_stop_timeout = 30
	I0802 18:19:09.606064   41488 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0802 18:19:09.606077   41488 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0802 18:19:09.606087   41488 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0802 18:19:09.606098   41488 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0802 18:19:09.606105   41488 command_runner.go:130] > drop_infra_ctr = false
	I0802 18:19:09.606113   41488 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0802 18:19:09.606124   41488 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0802 18:19:09.606137   41488 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0802 18:19:09.606146   41488 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0802 18:19:09.606155   41488 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0802 18:19:09.606166   41488 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0802 18:19:09.606176   41488 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0802 18:19:09.606186   41488 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0802 18:19:09.606207   41488 command_runner.go:130] > # shared_cpuset = ""
	I0802 18:19:09.606219   41488 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0802 18:19:09.606230   41488 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0802 18:19:09.606238   41488 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0802 18:19:09.606253   41488 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0802 18:19:09.606264   41488 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0802 18:19:09.606275   41488 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0802 18:19:09.606288   41488 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0802 18:19:09.606295   41488 command_runner.go:130] > # enable_criu_support = false
	I0802 18:19:09.606301   41488 command_runner.go:130] > # Enable/disable the generation of the container,
	I0802 18:19:09.606313   41488 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0802 18:19:09.606324   41488 command_runner.go:130] > # enable_pod_events = false
	I0802 18:19:09.606336   41488 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0802 18:19:09.606359   41488 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0802 18:19:09.606366   41488 command_runner.go:130] > # default_runtime = "runc"
	I0802 18:19:09.606377   41488 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0802 18:19:09.606387   41488 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0802 18:19:09.606405   41488 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0802 18:19:09.606416   41488 command_runner.go:130] > # creation as a file is not desired either.
	I0802 18:19:09.606429   41488 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0802 18:19:09.606440   41488 command_runner.go:130] > # the hostname is being managed dynamically.
	I0802 18:19:09.606451   41488 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0802 18:19:09.606459   41488 command_runner.go:130] > # ]
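	A sketch of the reject list described above, using the /etc/hostname example from the comments (any further entries would be deployment-specific assumptions):

	  [crio.runtime]
	  # Fail container creation if /etc/hostname is absent on the host, rather
	  # than letting it be created as a directory.
	  absent_mount_sources_to_reject = [
	  	"/etc/hostname",
	  ]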
	I0802 18:19:09.606467   41488 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0802 18:19:09.606479   41488 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0802 18:19:09.606492   41488 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0802 18:19:09.606503   41488 command_runner.go:130] > # Each entry in the table should follow the format:
	I0802 18:19:09.606511   41488 command_runner.go:130] > #
	I0802 18:19:09.606521   41488 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0802 18:19:09.606531   41488 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0802 18:19:09.606614   41488 command_runner.go:130] > # runtime_type = "oci"
	I0802 18:19:09.606632   41488 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0802 18:19:09.606638   41488 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0802 18:19:09.606645   41488 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0802 18:19:09.606656   41488 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0802 18:19:09.606665   41488 command_runner.go:130] > # monitor_env = []
	I0802 18:19:09.606676   41488 command_runner.go:130] > # privileged_without_host_devices = false
	I0802 18:19:09.606692   41488 command_runner.go:130] > # allowed_annotations = []
	I0802 18:19:09.606704   41488 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0802 18:19:09.606712   41488 command_runner.go:130] > # Where:
	I0802 18:19:09.606722   41488 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0802 18:19:09.606728   41488 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0802 18:19:09.606741   41488 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0802 18:19:09.606754   41488 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0802 18:19:09.606763   41488 command_runner.go:130] > #   in $PATH.
	I0802 18:19:09.606774   41488 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0802 18:19:09.606787   41488 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0802 18:19:09.606796   41488 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0802 18:19:09.606801   41488 command_runner.go:130] > #   state.
	I0802 18:19:09.606808   41488 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0802 18:19:09.606814   41488 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0802 18:19:09.606822   41488 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0802 18:19:09.606831   41488 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0802 18:19:09.606844   41488 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0802 18:19:09.606854   41488 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0802 18:19:09.606861   41488 command_runner.go:130] > #   The currently recognized values are:
	I0802 18:19:09.606872   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0802 18:19:09.606886   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0802 18:19:09.606895   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0802 18:19:09.606903   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0802 18:19:09.606917   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0802 18:19:09.606931   41488 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0802 18:19:09.606943   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0802 18:19:09.606958   41488 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0802 18:19:09.606970   41488 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0802 18:19:09.606983   41488 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0802 18:19:09.606992   41488 command_runner.go:130] > #   deprecated option "conmon".
	I0802 18:19:09.607006   41488 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0802 18:19:09.607017   41488 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0802 18:19:09.607031   41488 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0802 18:19:09.607041   41488 command_runner.go:130] > #   should be moved to the container's cgroup
	I0802 18:19:09.607055   41488 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0802 18:19:09.607063   41488 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0802 18:19:09.607078   41488 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0802 18:19:09.607091   41488 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0802 18:19:09.607109   41488 command_runner.go:130] > #
	I0802 18:19:09.607119   41488 command_runner.go:130] > # Using the seccomp notifier feature:
	I0802 18:19:09.607127   41488 command_runner.go:130] > #
	I0802 18:19:09.607137   41488 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0802 18:19:09.607150   41488 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0802 18:19:09.607157   41488 command_runner.go:130] > #
	I0802 18:19:09.607167   41488 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0802 18:19:09.607180   41488 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0802 18:19:09.607188   41488 command_runner.go:130] > #
	I0802 18:19:09.607198   41488 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0802 18:19:09.607207   41488 command_runner.go:130] > # feature.
	I0802 18:19:09.607215   41488 command_runner.go:130] > #
	I0802 18:19:09.607226   41488 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0802 18:19:09.607234   41488 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0802 18:19:09.607247   41488 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0802 18:19:09.607260   41488 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0802 18:19:09.607271   41488 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0802 18:19:09.607279   41488 command_runner.go:130] > #
	I0802 18:19:09.607288   41488 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0802 18:19:09.607301   41488 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0802 18:19:09.607308   41488 command_runner.go:130] > #
	I0802 18:19:09.607313   41488 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0802 18:19:09.607323   41488 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0802 18:19:09.607331   41488 command_runner.go:130] > #
	I0802 18:19:09.607341   41488 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0802 18:19:09.607353   41488 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0802 18:19:09.607361   41488 command_runner.go:130] > # limitation.
	I0802 18:19:09.607368   41488 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0802 18:19:09.607377   41488 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0802 18:19:09.607386   41488 command_runner.go:130] > runtime_type = "oci"
	I0802 18:19:09.607394   41488 command_runner.go:130] > runtime_root = "/run/runc"
	I0802 18:19:09.607398   41488 command_runner.go:130] > runtime_config_path = ""
	I0802 18:19:09.607406   41488 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0802 18:19:09.607416   41488 command_runner.go:130] > monitor_cgroup = "pod"
	I0802 18:19:09.607433   41488 command_runner.go:130] > monitor_exec_cgroup = ""
	I0802 18:19:09.607442   41488 command_runner.go:130] > monitor_env = [
	I0802 18:19:09.607454   41488 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0802 18:19:09.607461   41488 command_runner.go:130] > ]
	I0802 18:19:09.607468   41488 command_runner.go:130] > privileged_without_host_devices = false
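	To illustrate the runtime-handler table format and the seccomp notifier wiring documented above, a hypothetical second handler might look like the sketch below; the handler name, crun binary path and runtime_root are assumptions, while the keys and the annotation string come from the comments:

	  [crio.runtime.runtimes.crun-debug]
	  runtime_path = "/usr/bin/crun"
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	  monitor_path = "/usr/libexec/crio/conmon"
	  monitor_cgroup = "pod"
	  # Allow only this handler to process the seccomp notifier annotation.
	  allowed_annotations = [
	  	"io.kubernetes.cri-o.seccompNotifierAction",
	  ]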
	I0802 18:19:09.607480   41488 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0802 18:19:09.607487   41488 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0802 18:19:09.607499   41488 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0802 18:19:09.607515   41488 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0802 18:19:09.607529   41488 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0802 18:19:09.607540   41488 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0802 18:19:09.607557   41488 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0802 18:19:09.607568   41488 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0802 18:19:09.607579   41488 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0802 18:19:09.607591   41488 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0802 18:19:09.607600   41488 command_runner.go:130] > # Example:
	I0802 18:19:09.607608   41488 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0802 18:19:09.607615   41488 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0802 18:19:09.607622   41488 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0802 18:19:09.607634   41488 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0802 18:19:09.607639   41488 command_runner.go:130] > # cpuset = 0
	I0802 18:19:09.607645   41488 command_runner.go:130] > # cpushares = "0-1"
	I0802 18:19:09.607649   41488 command_runner.go:130] > # Where:
	I0802 18:19:09.607654   41488 command_runner.go:130] > # The workload name is workload-type.
	I0802 18:19:09.607662   41488 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0802 18:19:09.607671   41488 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0802 18:19:09.607680   41488 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0802 18:19:09.607691   41488 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0802 18:19:09.607700   41488 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0802 18:19:09.607708   41488 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0802 18:19:09.607717   41488 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0802 18:19:09.607782   41488 command_runner.go:130] > # Default value is set to true
	I0802 18:19:09.607843   41488 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0802 18:19:09.607858   41488 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0802 18:19:09.607873   41488 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0802 18:19:09.607882   41488 command_runner.go:130] > # Default value is set to 'false'
	I0802 18:19:09.607907   41488 command_runner.go:130] > # disable_hostport_mapping = false
	I0802 18:19:09.607921   41488 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0802 18:19:09.607929   41488 command_runner.go:130] > #
	I0802 18:19:09.607940   41488 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0802 18:19:09.607962   41488 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0802 18:19:09.607974   41488 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0802 18:19:09.607987   41488 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0802 18:19:09.608000   41488 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0802 18:19:09.608009   41488 command_runner.go:130] > [crio.image]
	I0802 18:19:09.608019   41488 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0802 18:19:09.608029   41488 command_runner.go:130] > # default_transport = "docker://"
	I0802 18:19:09.608041   41488 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0802 18:19:09.608053   41488 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0802 18:19:09.608060   41488 command_runner.go:130] > # global_auth_file = ""
	I0802 18:19:09.608067   41488 command_runner.go:130] > # The image used to instantiate infra containers.
	I0802 18:19:09.608078   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.608090   41488 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0802 18:19:09.608103   41488 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0802 18:19:09.608115   41488 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0802 18:19:09.608126   41488 command_runner.go:130] > # This option supports live configuration reload.
	I0802 18:19:09.608135   41488 command_runner.go:130] > # pause_image_auth_file = ""
	I0802 18:19:09.608144   41488 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0802 18:19:09.608152   41488 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0802 18:19:09.608165   41488 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0802 18:19:09.608177   41488 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0802 18:19:09.608187   41488 command_runner.go:130] > # pause_command = "/pause"
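	A sketch of setting the infra-image options explicitly; the image reference and pause command are the commented defaults above, while the auth file path is an assumption modeled on the kubelet-style example in the comments:

	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.9"
	  pause_image_auth_file = "/var/lib/kubelet/config.json"
	  pause_command = "/pause"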
	I0802 18:19:09.608199   41488 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0802 18:19:09.608210   41488 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0802 18:19:09.608222   41488 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0802 18:19:09.608230   41488 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0802 18:19:09.608242   41488 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0802 18:19:09.608256   41488 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0802 18:19:09.608265   41488 command_runner.go:130] > # pinned_images = [
	I0802 18:19:09.608273   41488 command_runner.go:130] > # ]
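	An illustrative pin list showing the three pattern styles described above (exact, glob, keyword); the image names are examples, not values from this run:

	  pinned_images = [
	  	"registry.k8s.io/pause:3.9",   # exact: must match the entire name
	  	"registry.k8s.io/kube-*",      # glob: wildcard at the end
	  	"*coredns*",                   # keyword: wildcards on both ends
	  ]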
	I0802 18:19:09.608282   41488 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0802 18:19:09.608295   41488 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0802 18:19:09.608312   41488 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0802 18:19:09.608324   41488 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0802 18:19:09.608335   41488 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0802 18:19:09.608358   41488 command_runner.go:130] > # signature_policy = ""
	I0802 18:19:09.608369   41488 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0802 18:19:09.608387   41488 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0802 18:19:09.608397   41488 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0802 18:19:09.608407   41488 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0802 18:19:09.608419   41488 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0802 18:19:09.608431   41488 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0802 18:19:09.608443   41488 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0802 18:19:09.608460   41488 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0802 18:19:09.608469   41488 command_runner.go:130] > # changing them here.
	I0802 18:19:09.608478   41488 command_runner.go:130] > # insecure_registries = [
	I0802 18:19:09.608484   41488 command_runner.go:130] > # ]
	I0802 18:19:09.608493   41488 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0802 18:19:09.608505   41488 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0802 18:19:09.608514   41488 command_runner.go:130] > # image_volumes = "mkdir"
	I0802 18:19:09.608526   41488 command_runner.go:130] > # Temporary directory to use for storing big files
	I0802 18:19:09.608537   41488 command_runner.go:130] > # big_files_temporary_dir = ""
	I0802 18:19:09.608549   41488 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0802 18:19:09.608558   41488 command_runner.go:130] > # CNI plugins.
	I0802 18:19:09.608565   41488 command_runner.go:130] > [crio.network]
	I0802 18:19:09.608571   41488 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0802 18:19:09.608582   41488 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0802 18:19:09.608592   41488 command_runner.go:130] > # cni_default_network = ""
	I0802 18:19:09.608603   41488 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0802 18:19:09.608614   41488 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0802 18:19:09.608626   41488 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0802 18:19:09.608634   41488 command_runner.go:130] > # plugin_dirs = [
	I0802 18:19:09.608643   41488 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0802 18:19:09.608651   41488 command_runner.go:130] > # ]
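	A sketch of selecting a CNI network explicitly instead of taking the first one found; the network name "kindnet" is an assumption, the directories are the commented defaults above:

	  [crio.network]
	  cni_default_network = "kindnet"
	  network_dir = "/etc/cni/net.d/"
	  plugin_dirs = [
	  	"/opt/cni/bin/",
	  ]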
	I0802 18:19:09.608660   41488 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0802 18:19:09.608664   41488 command_runner.go:130] > [crio.metrics]
	I0802 18:19:09.608672   41488 command_runner.go:130] > # Globally enable or disable metrics support.
	I0802 18:19:09.608682   41488 command_runner.go:130] > enable_metrics = true
	I0802 18:19:09.608695   41488 command_runner.go:130] > # Specify enabled metrics collectors.
	I0802 18:19:09.608705   41488 command_runner.go:130] > # Per default all metrics are enabled.
	I0802 18:19:09.608719   41488 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0802 18:19:09.608731   41488 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0802 18:19:09.608742   41488 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0802 18:19:09.608749   41488 command_runner.go:130] > # metrics_collectors = [
	I0802 18:19:09.608753   41488 command_runner.go:130] > # 	"operations",
	I0802 18:19:09.608761   41488 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0802 18:19:09.608771   41488 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0802 18:19:09.608781   41488 command_runner.go:130] > # 	"operations_errors",
	I0802 18:19:09.608790   41488 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0802 18:19:09.608801   41488 command_runner.go:130] > # 	"image_pulls_by_name",
	I0802 18:19:09.608810   41488 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0802 18:19:09.608820   41488 command_runner.go:130] > # 	"image_pulls_failures",
	I0802 18:19:09.608827   41488 command_runner.go:130] > # 	"image_pulls_successes",
	I0802 18:19:09.608833   41488 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0802 18:19:09.608838   41488 command_runner.go:130] > # 	"image_layer_reuse",
	I0802 18:19:09.608848   41488 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0802 18:19:09.608858   41488 command_runner.go:130] > # 	"containers_oom_total",
	I0802 18:19:09.608867   41488 command_runner.go:130] > # 	"containers_oom",
	I0802 18:19:09.608877   41488 command_runner.go:130] > # 	"processes_defunct",
	I0802 18:19:09.608886   41488 command_runner.go:130] > # 	"operations_total",
	I0802 18:19:09.608896   41488 command_runner.go:130] > # 	"operations_latency_seconds",
	I0802 18:19:09.608906   41488 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0802 18:19:09.608914   41488 command_runner.go:130] > # 	"operations_errors_total",
	I0802 18:19:09.608919   41488 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0802 18:19:09.608927   41488 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0802 18:19:09.608934   41488 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0802 18:19:09.608944   41488 command_runner.go:130] > # 	"image_pulls_success_total",
	I0802 18:19:09.608954   41488 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0802 18:19:09.608964   41488 command_runner.go:130] > # 	"containers_oom_count_total",
	I0802 18:19:09.608975   41488 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0802 18:19:09.608985   41488 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0802 18:19:09.608992   41488 command_runner.go:130] > # ]
	I0802 18:19:09.609001   41488 command_runner.go:130] > # The port on which the metrics server will listen.
	I0802 18:19:09.609006   41488 command_runner.go:130] > # metrics_port = 9090
	I0802 18:19:09.609014   41488 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0802 18:19:09.609024   41488 command_runner.go:130] > # metrics_socket = ""
	I0802 18:19:09.609035   41488 command_runner.go:130] > # The certificate for the secure metrics server.
	I0802 18:19:09.609048   41488 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0802 18:19:09.609065   41488 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0802 18:19:09.609075   41488 command_runner.go:130] > # certificate on any modification event.
	I0802 18:19:09.609083   41488 command_runner.go:130] > # metrics_cert = ""
	I0802 18:19:09.609088   41488 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0802 18:19:09.609096   41488 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0802 18:19:09.609105   41488 command_runner.go:130] > # metrics_key = ""
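	A minimal sketch of a secured metrics endpoint with a reduced collector set; the port is the commented default, the collector names are taken from the list above, and the certificate paths are assumptions:

	  [crio.metrics]
	  enable_metrics = true
	  metrics_port = 9090
	  metrics_collectors = [
	  	"operations",
	  	"image_pulls_failure_total",
	  	"containers_oom_count_total",
	  ]
	  metrics_cert = "/etc/crio/metrics.crt"
	  metrics_key = "/etc/crio/metrics.key"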
	I0802 18:19:09.609118   41488 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0802 18:19:09.609127   41488 command_runner.go:130] > [crio.tracing]
	I0802 18:19:09.609138   41488 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0802 18:19:09.609148   41488 command_runner.go:130] > # enable_tracing = false
	I0802 18:19:09.609160   41488 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0802 18:19:09.609168   41488 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0802 18:19:09.609174   41488 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0802 18:19:09.609186   41488 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
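	A sketch of turning tracing on; the endpoint is the commented default and the sampling value follows the "always sample" hint above:

	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "0.0.0.0:4317"
	  tracing_sampling_rate_per_million = 1000000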
	I0802 18:19:09.609196   41488 command_runner.go:130] > # CRI-O NRI configuration.
	I0802 18:19:09.609204   41488 command_runner.go:130] > [crio.nri]
	I0802 18:19:09.609214   41488 command_runner.go:130] > # Globally enable or disable NRI.
	I0802 18:19:09.609223   41488 command_runner.go:130] > # enable_nri = false
	I0802 18:19:09.609230   41488 command_runner.go:130] > # NRI socket to listen on.
	I0802 18:19:09.609237   41488 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0802 18:19:09.609246   41488 command_runner.go:130] > # NRI plugin directory to use.
	I0802 18:19:09.609255   41488 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0802 18:19:09.609260   41488 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0802 18:19:09.609270   41488 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0802 18:19:09.609281   41488 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0802 18:19:09.609292   41488 command_runner.go:130] > # nri_disable_connections = false
	I0802 18:19:09.609304   41488 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0802 18:19:09.609314   41488 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0802 18:19:09.609325   41488 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0802 18:19:09.609334   41488 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
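	A sketch of enabling NRI using the default socket, directories and timeouts documented in the comments above:

	  [crio.nri]
	  enable_nri = true
	  nri_listen = "/var/run/nri/nri.sock"
	  nri_plugin_dir = "/opt/nri/plugins"
	  nri_plugin_config_dir = "/etc/nri/conf.d"
	  nri_disable_connections = false
	  nri_plugin_registration_timeout = "5s"
	  nri_plugin_request_timeout = "2s"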
	I0802 18:19:09.609347   41488 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0802 18:19:09.609353   41488 command_runner.go:130] > [crio.stats]
	I0802 18:19:09.609359   41488 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0802 18:19:09.609366   41488 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0802 18:19:09.609373   41488 command_runner.go:130] > # stats_collection_period = 0
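	A sketch of switching stats collection from on-demand to periodic; the 10-second period is illustrative:

	  [crio.stats]
	  stats_collection_period = 10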
	I0802 18:19:09.609416   41488 command_runner.go:130] ! time="2024-08-02 18:19:09.549271871Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0802 18:19:09.609437   41488 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0802 18:19:09.609566   41488 cni.go:84] Creating CNI manager for ""
	I0802 18:19:09.609583   41488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0802 18:19:09.609595   41488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:19:09.609615   41488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.67 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-250383 NodeName:multinode-250383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.67"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:19:09.609746   41488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-250383"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.67"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:19:09.609813   41488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:19:09.619540   41488 command_runner.go:130] > kubeadm
	I0802 18:19:09.619558   41488 command_runner.go:130] > kubectl
	I0802 18:19:09.619563   41488 command_runner.go:130] > kubelet
	I0802 18:19:09.619578   41488 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:19:09.619627   41488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:19:09.628735   41488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0802 18:19:09.644275   41488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:19:09.659415   41488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0802 18:19:09.676196   41488 ssh_runner.go:195] Run: grep 192.168.39.67	control-plane.minikube.internal$ /etc/hosts
	I0802 18:19:09.679804   41488 command_runner.go:130] > 192.168.39.67	control-plane.minikube.internal
	I0802 18:19:09.680094   41488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:19:09.815507   41488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:19:09.829363   41488 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383 for IP: 192.168.39.67
	I0802 18:19:09.829393   41488 certs.go:194] generating shared ca certs ...
	I0802 18:19:09.829409   41488 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:19:09.829569   41488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:19:09.829606   41488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:19:09.829615   41488 certs.go:256] generating profile certs ...
	I0802 18:19:09.829698   41488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/client.key
	I0802 18:19:09.829781   41488 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key.1086a566
	I0802 18:19:09.829828   41488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key
	I0802 18:19:09.829839   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0802 18:19:09.829850   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0802 18:19:09.829861   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0802 18:19:09.829874   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0802 18:19:09.829884   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0802 18:19:09.829899   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0802 18:19:09.829910   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0802 18:19:09.829920   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0802 18:19:09.829975   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:19:09.830003   41488 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:19:09.830011   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:19:09.830035   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:19:09.830059   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:19:09.830080   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:19:09.830141   41488 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:19:09.830169   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem -> /usr/share/ca-certificates/12547.pem
	I0802 18:19:09.830182   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> /usr/share/ca-certificates/125472.pem
	I0802 18:19:09.830195   41488 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:09.830749   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:19:09.853563   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:19:09.875798   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:19:09.898398   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:19:09.921220   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0802 18:19:09.943138   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:19:09.965160   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:19:09.986964   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/multinode-250383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:19:10.008699   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:19:10.030329   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:19:10.052571   41488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:19:10.074632   41488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:19:10.089579   41488 ssh_runner.go:195] Run: openssl version
	I0802 18:19:10.094917   41488 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0802 18:19:10.095056   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:19:10.106360   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110360   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110498   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.110597   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:19:10.115911   41488 command_runner.go:130] > b5213941
	I0802 18:19:10.115978   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:19:10.126009   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:19:10.137826   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141890   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141920   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.141968   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:19:10.147077   41488 command_runner.go:130] > 51391683
	I0802 18:19:10.147155   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:19:10.155881   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:19:10.165723   41488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.169878   41488 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.170116   41488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.170166   41488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:19:10.175194   41488 command_runner.go:130] > 3ec20f2e
	I0802 18:19:10.175268   41488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:19:10.184192   41488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:19:10.188292   41488 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:19:10.188311   41488 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0802 18:19:10.188316   41488 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0802 18:19:10.188323   41488 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0802 18:19:10.188331   41488 command_runner.go:130] > Access: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188336   41488 command_runner.go:130] > Modify: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188343   41488 command_runner.go:130] > Change: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188350   41488 command_runner.go:130] >  Birth: 2024-08-02 18:12:09.250230645 +0000
	I0802 18:19:10.188400   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:19:10.193451   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.193589   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:19:10.198724   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.198774   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:19:10.203701   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.203864   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:19:10.209040   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.209158   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:19:10.214779   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.214823   41488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0802 18:19:10.219933   41488 command_runner.go:130] > Certificate will not expire
	I0802 18:19:10.220202   41488 kubeadm.go:392] StartCluster: {Name:multinode-250383 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-250383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.67 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.114 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.172 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:19:10.220298   41488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:19:10.220369   41488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:19:10.255092   41488 command_runner.go:130] > 9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5
	I0802 18:19:10.255141   41488 command_runner.go:130] > 2a11e0f9813bcbd6e4131452b86d24f9202bbf7bce3a8f936eeed2294fedeb9c
	I0802 18:19:10.255151   41488 command_runner.go:130] > 595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907
	I0802 18:19:10.255161   41488 command_runner.go:130] > b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79
	I0802 18:19:10.255170   41488 command_runner.go:130] > 4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee
	I0802 18:19:10.255176   41488 command_runner.go:130] > bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91
	I0802 18:19:10.255181   41488 command_runner.go:130] > 995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a
	I0802 18:19:10.255198   41488 command_runner.go:130] > 98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b
	I0802 18:19:10.255210   41488 command_runner.go:130] > e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd
	I0802 18:19:10.255237   41488 cri.go:89] found id: "9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5"
	I0802 18:19:10.255250   41488 cri.go:89] found id: "2a11e0f9813bcbd6e4131452b86d24f9202bbf7bce3a8f936eeed2294fedeb9c"
	I0802 18:19:10.255255   41488 cri.go:89] found id: "595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907"
	I0802 18:19:10.255260   41488 cri.go:89] found id: "b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79"
	I0802 18:19:10.255267   41488 cri.go:89] found id: "4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee"
	I0802 18:19:10.255272   41488 cri.go:89] found id: "bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91"
	I0802 18:19:10.255276   41488 cri.go:89] found id: "995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a"
	I0802 18:19:10.255282   41488 cri.go:89] found id: "98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b"
	I0802 18:19:10.255285   41488 cri.go:89] found id: "e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd"
	I0802 18:19:10.255292   41488 cri.go:89] found id: ""
	I0802 18:19:10.255344   41488 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.049042695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623001049018288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6538c9b4-25c4-418c-91cb-191b2ac2e6b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.049528578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceef3441-efef-465e-9a63-43864cadcec4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.049606557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceef3441-efef-465e-9a63-43864cadcec4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.049986288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ceef3441-efef-465e-9a63-43864cadcec4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.090733881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13290234-9fd4-446a-9a41-a1b736bb0604 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.090843682Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13290234-9fd4-446a-9a41-a1b736bb0604 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.094489426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffd939f5-fb42-4213-8a5a-d172249f9db2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.094943603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623001094917061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffd939f5-fb42-4213-8a5a-d172249f9db2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.096085412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5841e5a-a046-4726-9774-e6561f55dc2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.096163027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5841e5a-a046-4726-9774-e6561f55dc2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.096585400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5841e5a-a046-4726-9774-e6561f55dc2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.144568967Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a85c8e4-0dae-494d-817c-a499d0f5e214 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.144654214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a85c8e4-0dae-494d-817c-a499d0f5e214 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.145544104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0191557-9df6-40ec-8201-e8dc19c64d8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.146108553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623001146049579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0191557-9df6-40ec-8201-e8dc19c64d8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.146524854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9898fb48-493e-4174-9875-c0fa5a51d646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.146591481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9898fb48-493e-4174-9875-c0fa5a51d646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.146946395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9898fb48-493e-4174-9875-c0fa5a51d646 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.191675798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abe4808d-ff11-4533-ada9-b9f553ab61a5 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.191760165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abe4808d-ff11-4533-ada9-b9f553ab61a5 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.193457202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8385286-ff59-4460-821a-e79c5b06c3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.194172530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623001194148429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8385286-ff59-4460-821a-e79c5b06c3e8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.195270413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb9d8bf5-7192-4ca5-bb53-be3416550ee9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.195343319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb9d8bf5-7192-4ca5-bb53-be3416550ee9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:23:21 multinode-250383 crio[2961]: time="2024-08-02 18:23:21.195760478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f825d06a22a7497d863561bd27b24d21c155e3a124e0af0dfd33603c28804657,PodSandboxId:835e9f0282b33c8f52be2dcdafea6357b48e992c25b83d1cb06f383fb28d9b36,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722622790017107154,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc,PodSandboxId:db7b4c3cee33edb87f1a23b3e1d154e27db48ff95b8fe8345f32781beaedff9b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722622756416280084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f,PodSandboxId:c9939974839ae48b8443bc5a771f071aa4edfff5c19b7917d2547c87ca79b738,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722622756392925019,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbb2a64ca9fbd58114c40aa07ba1e6fd707f64160e285c92e3044db332a91562,PodSandboxId:aad616f4031406d4fe2399ad3b7c6d7e85877f023be155196c30f1f20b42366c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722622756364688720,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fi
le,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c,PodSandboxId:62a3d90a23f3ba992c72653fbe24b4c543c204238cf17bd81ed10965a7ee9c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722622756290518145,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerP
ort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6,PodSandboxId:6bc745953bc49e54448e26cf949a38c489eee74b854b6178fe7ec2d9a158cb18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722622752462987897,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7,PodSandboxId:ca6032e84f4e1dc0fdc49b2a11be1c9f132e1cd422dc7f825d86b0b9f5510577,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722622752484835259,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kub
ernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f,PodSandboxId:c45c22fcbbaeadef4286655e041db9d66af9d094405f3acedd73090d23b6909f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722622752478763068,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 5c6261a3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9,PodSandboxId:f316c337fc45552bd2c66d758e91d2b0ded8f47d7c7e880171779ba77614b485,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722622752422321142,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotations:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5,PodSandboxId:01171b0fa1c4615d234526b92702f2192ccdf252a3fb8fb35ff274c960dc7dec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722622738915254445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p22xc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b262e69d-3b94-44ce-aae2-f309fece26ab,},Annotations:map[string]string{io.kubernetes.container.hash: f90a3f8b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080296105c460adefe61e5eb38ac79a48fa159d76ec689ef1e2e991d54b8daa4,PodSandboxId:e9f1315c6d6031ca77ef47faef093111cc8f6b7232f145e132cd39f2888a59d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722622420917421352,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6vqf8,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e30d8939-3bac-44f5-9d29-1b79a4e40748,},Annotations:map[string]string{io.kubernetes.container.hash: 5b1523a1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:595e69fd3041aa648bcab23659f0fade10b799ecbb0bf8473322138da8235907,PodSandboxId:aca3fcdb5ef7d0f65f30a18d57db8828bf02b49801ea77e57780b88b7969f3dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722622367670751062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 816ce1dc-8f89-4c43-bdaf-6916dc76f56d,},Annotations:map[string]string{io.kubernetes.container.hash: df532b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79,PodSandboxId:3a0fc305ccb27f8de61466e9095e179073cc71810ad3b67d08d36a4735e03c0f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722622355734818866,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k47qb,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 43861b63-f926-47e1-a17d-4fe2f162b13b,},Annotations:map[string]string{io.kubernetes.container.hash: fb08b111,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee,PodSandboxId:8acb9191287bb74c85245ae5dd4020f348c043b48f779d174b149327f42ac1cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722622352092691925,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sjq5b,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e54c69c1-fdde-43c6-90d5-cd2171a4b1bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b7efc84,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91,PodSandboxId:34c16d3eea7b3cd4362b3047c069a573c9a4d5df466ecd8216730bb0dc1e4978,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722622332401646173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
e522cf6c1eb33fa299c33e4a0954c438,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a,PodSandboxId:493457b81a9b33bb2f456335a803dc0a849d461f7985091cd5de0e403999e4d1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722622332392128969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95626ffa0c0a69d2107137152d8db0de,},Annotation
s:map[string]string{io.kubernetes.container.hash: f4cdb800,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b,PodSandboxId:e79a3a9f456f791d78cdae09e3969abefaf7dd434d0b764ec3b94af04419be51,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722622332340723312,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 077351e9cb19dc5b7c66c7a0ed7b86f3,
},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd,PodSandboxId:45ee7a236c1aa73dd926a6dc514ff2ecf91fe25923cc2978dcde448c7c12ec1c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722622332340656756,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-250383,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ae432e52cfc2c93af6399703698e93,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5c6261a3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb9d8bf5-7192-4ca5-bb53-be3416550ee9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f825d06a22a74       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   835e9f0282b33       busybox-fc5497c4f-6vqf8
	2c939adbf7379       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   db7b4c3cee33e       kindnet-k47qb
	26eebc9ebbff1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   c9939974839ae       kube-proxy-sjq5b
	fbb2a64ca9fbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   aad616f403140       storage-provisioner
	beab6760bae27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   2                   62a3d90a23f3b       coredns-7db6d8ff4d-p22xc
	5cbb8f6618e46       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   ca6032e84f4e1       kube-scheduler-multinode-250383
	1225b7f1b1c1f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   c45c22fcbbaea       kube-apiserver-multinode-250383
	aceac5df534ea       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   6bc745953bc49       kube-controller-manager-multinode-250383
	2d2e98aabffd7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   f316c337fc455       etcd-multinode-250383
	9ac19b826e084       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   01171b0fa1c46       coredns-7db6d8ff4d-p22xc
	080296105c460       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   e9f1315c6d603       busybox-fc5497c4f-6vqf8
	595e69fd3041a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   aca3fcdb5ef7d       storage-provisioner
	b117f7898e49b       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   3a0fc305ccb27       kindnet-k47qb
	4ad8d7e314b1e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   8acb9191287bb       kube-proxy-sjq5b
	bfcb3f51365d2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   34c16d3eea7b3       kube-scheduler-multinode-250383
	995dfd5bd7840       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   493457b81a9b3       etcd-multinode-250383
	98da8355877a7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   e79a3a9f456f7       kube-controller-manager-multinode-250383
	e1c10cb7907ec       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   45ee7a236c1aa       kube-apiserver-multinode-250383
	
	
	==> coredns [9ac19b826e084bde2ded377df9ebf2a109e0b61827f32a3031225621977d4cc5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49568 - 7292 "HINFO IN 6251548447806641683.3005424376035411823. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011124339s
	
	
	==> coredns [beab6760bae27ed786434fe87ddd0db2a2b31ec1f142098ff4e0591d217b033c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38561 - 32959 "HINFO IN 5626239399824007099.1786114741129606773. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011224542s
	
	
	==> describe nodes <==
	Name:               multinode-250383
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-250383
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=multinode-250383
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_12_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-250383
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:23:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:19:15 +0000   Fri, 02 Aug 2024 18:12:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.67
	  Hostname:    multinode-250383
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 931f9cb08c51491586b0f1037696dd39
	  System UUID:                931f9cb0-8c51-4915-86b0-f1037696dd39
	  Boot ID:                    f9a248e1-f9c4-46a4-85cf-fb7d585f9911
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6vqf8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-7db6d8ff4d-p22xc                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-250383                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-k47qb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-250383             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-250383    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-sjq5b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-250383             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-250383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-250383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-250383 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-250383 event: Registered Node multinode-250383 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-250383 status is now: NodeReady
	  Normal  Starting                 4m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-250383 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-250383 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-250383 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m54s                  node-controller  Node multinode-250383 event: Registered Node multinode-250383 in Controller
	
	
	Name:               multinode-250383-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-250383-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=multinode-250383
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_02T18_19_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:19:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-250383-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:20:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:21:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:21:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:21:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 02 Aug 2024 18:20:27 +0000   Fri, 02 Aug 2024 18:21:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    multinode-250383-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 917399c6f0634bfda38537db48c4baf3
	  System UUID:                917399c6-f063-4bfd-a385-37db48c4baf3
	  Boot ID:                    6104aa80-bea4-4d1f-90d3-c3fa75d62b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hntjs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-xdnv2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-w4hmf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-250383-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-250383-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-250383-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m46s                  kubelet          Node multinode-250383-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m25s)  kubelet          Node multinode-250383-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m25s)  kubelet          Node multinode-250383-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m25s)  kubelet          Node multinode-250383-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-250383-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-250383-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.053259] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.198867] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.127623] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.271889] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.953768] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.846237] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.057590] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.979885] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.102059] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.327660] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.826460] systemd-fstab-generator[1466]: Ignoring "noauto" option for root device
	[  +5.172099] kauditd_printk_skb: 58 callbacks suppressed
	[Aug 2 18:13] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 2 18:18] systemd-fstab-generator[2773]: Ignoring "noauto" option for root device
	[  +0.145531] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.171702] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.139256] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.390036] systemd-fstab-generator[2928]: Ignoring "noauto" option for root device
	[Aug 2 18:19] systemd-fstab-generator[3067]: Ignoring "noauto" option for root device
	[  +0.081906] kauditd_printk_skb: 110 callbacks suppressed
	[  +1.752381] systemd-fstab-generator[3191]: Ignoring "noauto" option for root device
	[  +4.690562] kauditd_printk_skb: 76 callbacks suppressed
	[ +11.911951] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.329741] systemd-fstab-generator[4041]: Ignoring "noauto" option for root device
	[ +17.500822] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2d2e98aabffd740ba129f2df09f3383baf6f2135ff8bf660d0af74a6a08e7aa9] <==
	{"level":"info","ts":"2024-08-02T18:19:12.823979Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T18:19:12.823988Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-02T18:19:12.824247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 switched to configuration voters=(929259593797349653)"}
	{"level":"info","ts":"2024-08-02T18:19:12.824319Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","added-peer-id":"ce564ad586a3115","added-peer-peer-urls":["https://192.168.39.67:2380"]}
	{"level":"info","ts":"2024-08-02T18:19:12.824431Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"429166af17098d53","local-member-id":"ce564ad586a3115","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:19:12.824503Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:19:12.838838Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-02T18:19:12.839087Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ce564ad586a3115","initial-advertise-peer-urls":["https://192.168.39.67:2380"],"listen-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.67:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T18:19:12.83913Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:19:12.839241Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:19:12.839261Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:19:13.992573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgPreVoteResp from ce564ad586a3115 at term 2"}
	{"level":"info","ts":"2024-08-02T18:19:13.992678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.992692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 received MsgVoteResp from ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.9927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce564ad586a3115 became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:13.99271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce564ad586a3115 elected leader ce564ad586a3115 at term 3"}
	{"level":"info","ts":"2024-08-02T18:19:14.002705Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ce564ad586a3115","local-member-attributes":"{Name:multinode-250383 ClientURLs:[https://192.168.39.67:2379]}","request-path":"/0/members/ce564ad586a3115/attributes","cluster-id":"429166af17098d53","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:19:14.002895Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:19:14.004514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:19:14.009499Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:19:14.00953Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:19:14.010844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:19:14.011352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.67:2379"}
	
	
	==> etcd [995dfd5bd784015f54742a72568772b6a9655f76e7a07c6e79b3bd18eefaaf3a] <==
	{"level":"info","ts":"2024-08-02T18:13:15.457374Z","caller":"traceutil/trace.go:171","msg":"trace[1943046641] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"151.811147ms","start":"2024-08-02T18:13:15.305552Z","end":"2024-08-02T18:13:15.457363Z","steps":["trace[1943046641] 'process raft request'  (duration: 151.2771ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:25.784905Z","caller":"traceutil/trace.go:171","msg":"trace[1248989928] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"192.6424ms","start":"2024-08-02T18:13:25.592236Z","end":"2024-08-02T18:13:25.784879Z","steps":["trace[1248989928] 'process raft request'  (duration: 192.492219ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:26.04872Z","caller":"traceutil/trace.go:171","msg":"trace[888830746] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"107.170707ms","start":"2024-08-02T18:13:25.941529Z","end":"2024-08-02T18:13:26.0487Z","steps":["trace[888830746] 'read index received'  (duration: 54.373979ms)","trace[888830746] 'applied index is now lower than readState.Index'  (duration: 52.795628ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:13:26.048912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.328428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T18:13:26.049005Z","caller":"traceutil/trace.go:171","msg":"trace[617738981] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:505; }","duration":"107.533589ms","start":"2024-08-02T18:13:25.941454Z","end":"2024-08-02T18:13:26.048988Z","steps":["trace[617738981] 'agreement among raft nodes before linearized reading'  (duration: 107.376602ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:13:26.049067Z","caller":"traceutil/trace.go:171","msg":"trace[1224528198] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"206.304026ms","start":"2024-08-02T18:13:25.842748Z","end":"2024-08-02T18:13:26.049052Z","steps":["trace[1224528198] 'process raft request'  (duration: 153.19601ms)","trace[1224528198] 'compare'  (duration: 52.642312ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:13:26.343957Z","caller":"traceutil/trace.go:171","msg":"trace[1351660476] linearizableReadLoop","detail":"{readStateIndex:532; appliedIndex:531; }","duration":"239.88476ms","start":"2024-08-02T18:13:26.104056Z","end":"2024-08-02T18:13:26.34394Z","steps":["trace[1351660476] 'read index received'  (duration: 182.025283ms)","trace[1351660476] 'applied index is now lower than readState.Index'  (duration: 57.858672ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:13:26.344098Z","caller":"traceutil/trace.go:171","msg":"trace[804152674] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"275.792401ms","start":"2024-08-02T18:13:26.068297Z","end":"2024-08-02T18:13:26.34409Z","steps":["trace[804152674] 'process raft request'  (duration: 217.827469ms)","trace[804152674] 'compare'  (duration: 57.750041ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:13:26.344294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.238081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T18:13:26.344348Z","caller":"traceutil/trace.go:171","msg":"trace[1019278426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:506; }","duration":"240.316483ms","start":"2024-08-02T18:13:26.104021Z","end":"2024-08-02T18:13:26.344337Z","steps":["trace[1019278426] 'agreement among raft nodes before linearized reading'  (duration: 240.243881ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081735Z","caller":"traceutil/trace.go:171","msg":"trace[2060008207] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"229.196911ms","start":"2024-08-02T18:14:12.852506Z","end":"2024-08-02T18:14:13.081703Z","steps":["trace[2060008207] 'process raft request'  (duration: 224.462043ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081777Z","caller":"traceutil/trace.go:171","msg":"trace[1122326976] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"148.045451ms","start":"2024-08-02T18:14:12.933714Z","end":"2024-08-02T18:14:13.081759Z","steps":["trace[1122326976] 'process raft request'  (duration: 147.98921ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:14:13.081822Z","caller":"traceutil/trace.go:171","msg":"trace[466374812] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"165.453332ms","start":"2024-08-02T18:14:12.916361Z","end":"2024-08-02T18:14:13.081814Z","steps":["trace[466374812] 'read index received'  (duration: 160.642107ms)","trace[466374812] 'applied index is now lower than readState.Index'  (duration: 4.810041ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:14:13.081986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.544692ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T18:14:13.082836Z","caller":"traceutil/trace.go:171","msg":"trace[425686911] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:593; }","duration":"166.487424ms","start":"2024-08-02T18:14:12.916339Z","end":"2024-08-02T18:14:13.082826Z","steps":["trace[425686911] 'agreement among raft nodes before linearized reading'  (duration: 165.496941ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T18:17:26.950964Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-02T18:17:26.951082Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-250383","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	{"level":"warn","ts":"2024-08-02T18:17:26.951161Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.959204Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.996733Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:17:26.996819Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.67:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T18:17:26.996943Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ce564ad586a3115","current-leader-member-id":"ce564ad586a3115"}
	{"level":"info","ts":"2024-08-02T18:17:26.999688Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:17:26.999956Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.67:2380"}
	{"level":"info","ts":"2024-08-02T18:17:27.000021Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-250383","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.67:2380"],"advertise-client-urls":["https://192.168.39.67:2379"]}
	
	
	==> kernel <==
	 18:23:21 up 11 min,  0 users,  load average: 0.06, 0.15, 0.10
	Linux multinode-250383 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2c939adbf73795bd9c2b2c0a4641f696c845801352849cede09ab386e4bb05cc] <==
	I0802 18:22:17.457264       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:22:27.461827       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:22:27.461888       1 main.go:299] handling current node
	I0802 18:22:27.461920       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:22:27.461928       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:22:37.464148       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:22:37.464329       1 main.go:299] handling current node
	I0802 18:22:37.464380       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:22:37.464400       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:22:47.463128       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:22:47.463187       1 main.go:299] handling current node
	I0802 18:22:47.463205       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:22:47.463211       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:22:57.456366       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:22:57.456418       1 main.go:299] handling current node
	I0802 18:22:57.456442       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:22:57.456448       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:23:07.460450       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:23:07.460545       1 main.go:299] handling current node
	I0802 18:23:07.460562       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:23:07.460567       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:23:17.456946       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:23:17.456991       1 main.go:299] handling current node
	I0802 18:23:17.457012       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:23:17.457022       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b117f7898e49b5314c511fa079521ea0e896ae19bf24ba5b595fc32bda933b79] <==
	I0802 18:16:46.754163       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:16:56.756085       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:16:56.756138       1 main.go:299] handling current node
	I0802 18:16:56.756157       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:16:56.756162       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:16:56.756283       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:16:56.756302       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:06.754959       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:06.755088       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:06.755244       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:06.755271       1 main.go:299] handling current node
	I0802 18:17:06.755294       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:06.755310       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:17:16.758734       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:16.758852       1 main.go:299] handling current node
	I0802 18:17:16.758885       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:16.758908       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	I0802 18:17:16.759070       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:16.759105       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:26.761584       1 main.go:295] Handling node with IPs: map[192.168.39.172:{}]
	I0802 18:17:26.761627       1 main.go:322] Node multinode-250383-m03 has CIDR [10.244.3.0/24] 
	I0802 18:17:26.761765       1 main.go:295] Handling node with IPs: map[192.168.39.67:{}]
	I0802 18:17:26.761783       1 main.go:299] handling current node
	I0802 18:17:26.761798       1 main.go:295] Handling node with IPs: map[192.168.39.114:{}]
	I0802 18:17:26.761812       1 main.go:322] Node multinode-250383-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1225b7f1b1c1f6b63bb479e019756883806da897058b865c00bb76257a5f4b6f] <==
	I0802 18:19:15.659662       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0802 18:19:15.659802       1 policy_source.go:224] refreshing policies
	I0802 18:19:15.660915       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 18:19:15.665817       1 aggregator.go:165] initial CRD sync complete...
	I0802 18:19:15.665857       1 autoregister_controller.go:141] Starting autoregister controller
	I0802 18:19:15.665866       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 18:19:15.666586       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 18:19:15.669042       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:19:15.756132       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 18:19:15.758206       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0802 18:19:15.759860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:19:15.760108       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:19:15.760570       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 18:19:15.760596       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 18:19:15.765562       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0802 18:19:15.766754       1 cache.go:39] Caches are synced for autoregister controller
	E0802 18:19:15.772996       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0802 18:19:16.567720       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:19:17.292244       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 18:19:17.402424       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 18:19:17.417854       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 18:19:17.484879       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:19:17.493175       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:19:28.117732       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:19:28.217567       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e1c10cb7907ecba436d3ed390335bd8a01e0e76aea80cedbbf8dd94e626550fd] <==
	E0802 18:14:43.523577       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0802 18:14:43.523709       1 timeout.go:142] post-timeout activity - time-elapsed: 2.44987ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-250383-m03" result: <nil>
	I0802 18:17:26.943442       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0802 18:17:26.954408       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0802 18:17:26.955019       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0802 18:17:26.955545       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0802 18:17:26.955602       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0802 18:17:26.955732       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0802 18:17:26.955755       1 controller.go:129] Ending legacy_token_tracking_controller
	I0802 18:17:26.955763       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0802 18:17:26.955789       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0802 18:17:26.955824       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0802 18:17:26.955842       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0802 18:17:26.955861       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I0802 18:17:26.955886       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0802 18:17:26.955920       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0802 18:17:26.955946       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0802 18:17:26.955959       1 establishing_controller.go:87] Shutting down EstablishingController
	I0802 18:17:26.955985       1 naming_controller.go:302] Shutting down NamingConditionController
	I0802 18:17:26.955999       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0802 18:17:26.956018       1 controller.go:167] Shutting down OpenAPI controller
	I0802 18:17:26.956058       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0802 18:17:26.956082       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0802 18:17:26.956106       1 available_controller.go:439] Shutting down AvailableConditionController
	I0802 18:17:26.965026       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	
	
	==> kube-controller-manager [98da8355877a722072c1b56aec3c3004426aa38aacd5bc4bd87df566e526f16b] <==
	I0802 18:12:51.336650       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0802 18:13:15.459300       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m02\" does not exist"
	I0802 18:13:15.494795       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m02" podCIDRs=["10.244.1.0/24"]
	I0802 18:13:16.340106       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-250383-m02"
	I0802 18:13:35.666433       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:13:38.113793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.536671ms"
	I0802 18:13:38.144778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.932535ms"
	I0802 18:13:38.144881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.734µs"
	I0802 18:13:41.296100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.841752ms"
	I0802 18:13:41.296338       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.087µs"
	I0802 18:13:41.842981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.085295ms"
	I0802 18:13:41.844164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.145µs"
	I0802 18:14:13.083925       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:14:13.085674       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:14:13.096880       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.2.0/24"]
	I0802 18:14:16.360616       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-250383-m03"
	I0802 18:14:33.544634       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:01.368336       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:02.443544       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:15:02.443611       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:15:02.461051       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.3.0/24"]
	I0802 18:15:21.784700       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:16:01.412640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m03"
	I0802 18:16:01.459708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.833317ms"
	I0802 18:16:01.459786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.604µs"
	
	
	==> kube-controller-manager [aceac5df534eaa7f8cee9a49da8430b46c9228e0609dede1e2d195b1a6234af6] <==
	I0802 18:19:57.051380       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m02\" does not exist"
	I0802 18:19:57.061965       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m02" podCIDRs=["10.244.1.0/24"]
	I0802 18:19:58.942833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.27µs"
	I0802 18:19:58.983544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.888µs"
	I0802 18:19:58.991076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.857µs"
	I0802 18:19:59.014724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.644µs"
	I0802 18:19:59.024194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.736µs"
	I0802 18:19:59.026299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.767µs"
	I0802 18:20:16.573835       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:16.592119       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.686µs"
	I0802 18:20:16.606202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.202µs"
	I0802 18:20:20.180866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.513731ms"
	I0802 18:20:20.181077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.26µs"
	I0802 18:20:34.528379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:35.602946       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:20:35.603544       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-250383-m03\" does not exist"
	I0802 18:20:35.626991       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-250383-m03" podCIDRs=["10.244.2.0/24"]
	I0802 18:20:54.850311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:21:00.138739       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-250383-m02"
	I0802 18:21:43.004790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.682948ms"
	I0802 18:21:43.006654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.287µs"
	I0802 18:21:47.861506       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fb7dl"
	I0802 18:21:47.888236       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-fb7dl"
	I0802 18:21:47.888359       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hnzvs"
	I0802 18:21:47.914633       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-hnzvs"
	
	
	==> kube-proxy [26eebc9ebbff1976ff7d1e06136733e5480d90e28bfe93063a2e4a07ca42988f] <==
	I0802 18:19:16.617757       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:19:16.631760       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0802 18:19:16.696037       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:19:16.696109       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:19:16.696126       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:19:16.699172       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:19:16.699512       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:19:16.699592       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:19:16.700887       1 config.go:192] "Starting service config controller"
	I0802 18:19:16.700971       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:19:16.701058       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:19:16.701102       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:19:16.701735       1 config.go:319] "Starting node config controller"
	I0802 18:19:16.701805       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:19:16.802028       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:19:16.802128       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:19:16.802156       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [4ad8d7e314b1e05057ec782892b65ddb4113e15d934ffbaf89ca357d58d422ee] <==
	I0802 18:12:32.548287       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:12:32.563960       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.67"]
	I0802 18:12:32.612985       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:12:32.613078       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:12:32.613110       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:12:32.617128       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:12:32.617622       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:12:32.617687       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:12:32.619367       1 config.go:192] "Starting service config controller"
	I0802 18:12:32.619612       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:12:32.619670       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:12:32.619676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:12:32.620798       1 config.go:319] "Starting node config controller"
	I0802 18:12:32.620829       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:12:32.720013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0802 18:12:32.720127       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:12:32.721005       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cbb8f6618e46a91e9ff90c4351c77c97371f0b25a3189239891e0b0777810d7] <==
	I0802 18:19:14.466763       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:19:15.627301       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:19:15.627372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:19:15.627383       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:19:15.627389       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:19:15.667417       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:19:15.669270       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:19:15.673208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:19:15.673243       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:19:15.673811       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:19:15.673992       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:19:15.773606       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bfcb3f51365d2b0a1d05187b70fb74f991ffa24985ea938f53cef270b1c51c91] <==
	E0802 18:12:14.999744       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:14.999875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0802 18:12:14.999941       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0802 18:12:15.815085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 18:12:15.815140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 18:12:15.913760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 18:12:15.914131       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 18:12:15.917412       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 18:12:15.917587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 18:12:15.968731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 18:12:15.968866       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:16.024100       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 18:12:16.024144       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 18:12:16.037621       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 18:12:16.037721       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:12:16.117678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 18:12:16.117801       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 18:12:16.211388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 18:12:16.211553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 18:12:16.246025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0802 18:12:16.246523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0802 18:12:16.258345       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 18:12:16.258584       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0802 18:12:17.887340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0802 18:17:26.953612       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.833716    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-xtables-lock\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.833825    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-cni-cfg\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.834136    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43861b63-f926-47e1-a17d-4fe2f162b13b-lib-modules\") pod \"kindnet-k47qb\" (UID: \"43861b63-f926-47e1-a17d-4fe2f162b13b\") " pod="kube-system/kindnet-k47qb"
	Aug 02 18:19:15 multinode-250383 kubelet[3198]: I0802 18:19:15.834449    3198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e54c69c1-fdde-43c6-90d5-cd2171a4b1bc-xtables-lock\") pod \"kube-proxy-sjq5b\" (UID: \"e54c69c1-fdde-43c6-90d5-cd2171a4b1bc\") " pod="kube-system/kube-proxy-sjq5b"
	Aug 02 18:19:19 multinode-250383 kubelet[3198]: I0802 18:19:19.926752    3198 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 02 18:20:11 multinode-250383 kubelet[3198]: E0802 18:20:11.853610    3198 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:20:11 multinode-250383 kubelet[3198]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 18:21:11 multinode-250383 kubelet[3198]: E0802 18:21:11.853595    3198 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:21:11 multinode-250383 kubelet[3198]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:21:11 multinode-250383 kubelet[3198]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:21:11 multinode-250383 kubelet[3198]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:21:11 multinode-250383 kubelet[3198]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 18:22:11 multinode-250383 kubelet[3198]: E0802 18:22:11.854675    3198 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:22:11 multinode-250383 kubelet[3198]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:22:11 multinode-250383 kubelet[3198]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:22:11 multinode-250383 kubelet[3198]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:22:11 multinode-250383 kubelet[3198]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 18:23:11 multinode-250383 kubelet[3198]: E0802 18:23:11.863914    3198 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:23:11 multinode-250383 kubelet[3198]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:23:11 multinode-250383 kubelet[3198]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:23:11 multinode-250383 kubelet[3198]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:23:11 multinode-250383 kubelet[3198]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:23:20.804994   43887 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-250383 -n multinode-250383
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-250383 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.27s)

                                                
                                    
TestPreload (192.99s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-999194 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0802 18:27:43.928182   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-999194 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m51.309862383s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-999194 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-999194 image pull gcr.io/k8s-minikube/busybox: (2.501887833s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-999194
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-999194: (7.284881253s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-999194 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0802 18:29:57.305205   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 18:30:14.261690   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-999194 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.152178131s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-999194 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-02 18:30:21.041468587 +0000 UTC m=+3842.159636195
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-999194 -n test-preload-999194
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-999194 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383 sudo cat                                       | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt                       | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m02:/home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n                                                                 | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | multinode-250383-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-250383 ssh -n multinode-250383-m02 sudo cat                                   | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	|         | /home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-250383 node stop m03                                                          | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:14 UTC |
	| node    | multinode-250383 node start                                                             | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:14 UTC | 02 Aug 24 18:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| stop    | -p multinode-250383                                                                     | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:15 UTC |                     |
	| start   | -p multinode-250383                                                                     | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:17 UTC | 02 Aug 24 18:20 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:20 UTC |                     |
	| node    | multinode-250383 node delete                                                            | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:20 UTC | 02 Aug 24 18:21 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-250383 stop                                                                   | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:21 UTC |                     |
	| start   | -p multinode-250383                                                                     | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:23 UTC | 02 Aug 24 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-250383                                                                | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:26 UTC |                     |
	| start   | -p multinode-250383-m02                                                                 | multinode-250383-m02 | jenkins | v1.33.1 | 02 Aug 24 18:26 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-250383-m03                                                                 | multinode-250383-m03 | jenkins | v1.33.1 | 02 Aug 24 18:26 UTC | 02 Aug 24 18:27 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-250383                                                                 | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:27 UTC |                     |
	| delete  | -p multinode-250383-m03                                                                 | multinode-250383-m03 | jenkins | v1.33.1 | 02 Aug 24 18:27 UTC | 02 Aug 24 18:27 UTC |
	| delete  | -p multinode-250383                                                                     | multinode-250383     | jenkins | v1.33.1 | 02 Aug 24 18:27 UTC | 02 Aug 24 18:27 UTC |
	| start   | -p test-preload-999194                                                                  | test-preload-999194  | jenkins | v1.33.1 | 02 Aug 24 18:27 UTC | 02 Aug 24 18:29 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-999194 image pull                                                          | test-preload-999194  | jenkins | v1.33.1 | 02 Aug 24 18:29 UTC | 02 Aug 24 18:29 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-999194                                                                  | test-preload-999194  | jenkins | v1.33.1 | 02 Aug 24 18:29 UTC | 02 Aug 24 18:29 UTC |
	| start   | -p test-preload-999194                                                                  | test-preload-999194  | jenkins | v1.33.1 | 02 Aug 24 18:29 UTC | 02 Aug 24 18:30 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-999194 image list                                                          | test-preload-999194  | jenkins | v1.33.1 | 02 Aug 24 18:30 UTC | 02 Aug 24 18:30 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:29:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:29:11.712954   46274 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:29:11.713075   46274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:29:11.713087   46274 out.go:304] Setting ErrFile to fd 2...
	I0802 18:29:11.713094   46274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:29:11.713274   46274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:29:11.713818   46274 out.go:298] Setting JSON to false
	I0802 18:29:11.714682   46274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4296,"bootTime":1722619056,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:29:11.714738   46274 start.go:139] virtualization: kvm guest
	I0802 18:29:11.716977   46274 out.go:177] * [test-preload-999194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:29:11.718377   46274 notify.go:220] Checking for updates...
	I0802 18:29:11.718388   46274 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:29:11.719803   46274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:29:11.721182   46274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:29:11.722434   46274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:29:11.723640   46274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:29:11.724939   46274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:29:11.726765   46274 config.go:182] Loaded profile config "test-preload-999194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0802 18:29:11.727240   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:29:11.727358   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:29:11.741865   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40321
	I0802 18:29:11.742222   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:29:11.742747   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:29:11.742785   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:29:11.743084   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:29:11.743398   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:11.745251   46274 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 18:29:11.746414   46274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:29:11.746696   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:29:11.746727   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:29:11.760729   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
	I0802 18:29:11.761142   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:29:11.761575   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:29:11.761607   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:29:11.761907   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:29:11.762081   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:11.795437   46274 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:29:11.796611   46274 start.go:297] selected driver: kvm2
	I0802 18:29:11.796624   46274 start.go:901] validating driver "kvm2" against &{Name:test-preload-999194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-999194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:29:11.796729   46274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:29:11.797377   46274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:29:11.797458   46274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:29:11.811811   46274 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:29:11.812151   46274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:29:11.812219   46274 cni.go:84] Creating CNI manager for ""
	I0802 18:29:11.812233   46274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:29:11.812304   46274 start.go:340] cluster config:
	{Name:test-preload-999194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-999194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:29:11.812422   46274 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:29:11.814138   46274 out.go:177] * Starting "test-preload-999194" primary control-plane node in "test-preload-999194" cluster
	I0802 18:29:11.815305   46274 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0802 18:29:12.287202   46274 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0802 18:29:12.287225   46274 cache.go:56] Caching tarball of preloaded images
	I0802 18:29:12.287396   46274 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0802 18:29:12.289327   46274 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0802 18:29:12.290544   46274 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0802 18:29:12.391307   46274 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0802 18:29:23.892684   46274 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0802 18:29:23.892775   46274 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0802 18:29:24.732005   46274 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
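The preload step above downloads the tarball into the local cache and checks it against the md5 checksum carried in the URL's checksum= query parameter. A minimal Go sketch of that download-and-hash pattern, purely illustrative and not minikube's own download code (the URL and destination path are placeholders):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and returns the md5 hex digest of the
// bytes written, so the caller can compare it with an expected checksum.
func downloadWithMD5(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := md5.New()
	// Stream the response into the file and the hash at the same time.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Placeholder URL for illustration only.
	sum, err := downloadWithMD5("https://example.com/preload.tar.lz4", "/tmp/preload.tar.lz4")
	if err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
	fmt.Println("md5:", sum)
}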
	I0802 18:29:24.732118   46274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/config.json ...
	I0802 18:29:24.732331   46274 start.go:360] acquireMachinesLock for test-preload-999194: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:29:24.732396   46274 start.go:364] duration metric: took 44.683µs to acquireMachinesLock for "test-preload-999194"
	I0802 18:29:24.732408   46274 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:29:24.732416   46274 fix.go:54] fixHost starting: 
	I0802 18:29:24.732701   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:29:24.732731   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:29:24.747164   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0802 18:29:24.747774   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:29:24.748209   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:29:24.748223   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:29:24.748547   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:29:24.748691   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:24.748881   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetState
	I0802 18:29:24.750419   46274 fix.go:112] recreateIfNeeded on test-preload-999194: state=Stopped err=<nil>
	I0802 18:29:24.750459   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	W0802 18:29:24.750616   46274 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:29:24.752955   46274 out.go:177] * Restarting existing kvm2 VM for "test-preload-999194" ...
	I0802 18:29:24.754318   46274 main.go:141] libmachine: (test-preload-999194) Calling .Start
	I0802 18:29:24.754490   46274 main.go:141] libmachine: (test-preload-999194) Ensuring networks are active...
	I0802 18:29:24.755176   46274 main.go:141] libmachine: (test-preload-999194) Ensuring network default is active
	I0802 18:29:24.755628   46274 main.go:141] libmachine: (test-preload-999194) Ensuring network mk-test-preload-999194 is active
	I0802 18:29:24.755959   46274 main.go:141] libmachine: (test-preload-999194) Getting domain xml...
	I0802 18:29:24.756625   46274 main.go:141] libmachine: (test-preload-999194) Creating domain...
	I0802 18:29:25.946765   46274 main.go:141] libmachine: (test-preload-999194) Waiting to get IP...
	I0802 18:29:25.947555   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:25.947970   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:25.948041   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:25.947953   46341 retry.go:31] will retry after 219.275439ms: waiting for machine to come up
	I0802 18:29:26.168409   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:26.168764   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:26.168792   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:26.168718   46341 retry.go:31] will retry after 357.822307ms: waiting for machine to come up
	I0802 18:29:26.528414   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:26.528833   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:26.528861   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:26.528766   46341 retry.go:31] will retry after 364.286387ms: waiting for machine to come up
	I0802 18:29:26.894295   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:26.894722   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:26.894746   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:26.894677   46341 retry.go:31] will retry after 594.016498ms: waiting for machine to come up
	I0802 18:29:27.490496   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:27.491014   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:27.491043   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:27.490967   46341 retry.go:31] will retry after 561.103173ms: waiting for machine to come up
	I0802 18:29:28.053653   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:28.054091   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:28.054127   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:28.053956   46341 retry.go:31] will retry after 867.377255ms: waiting for machine to come up
	I0802 18:29:28.922963   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:28.923323   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:28.923347   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:28.923278   46341 retry.go:31] will retry after 766.974607ms: waiting for machine to come up
	I0802 18:29:29.692239   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:29.692605   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:29.692634   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:29.692551   46341 retry.go:31] will retry after 1.458947484s: waiting for machine to come up
	I0802 18:29:31.153754   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:31.154190   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:31.154219   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:31.154141   46341 retry.go:31] will retry after 1.289333847s: waiting for machine to come up
	I0802 18:29:32.445606   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:32.446082   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:32.446109   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:32.446032   46341 retry.go:31] will retry after 1.513985927s: waiting for machine to come up
	I0802 18:29:33.961621   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:33.962038   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:33.962064   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:33.961995   46341 retry.go:31] will retry after 2.115756656s: waiting for machine to come up
	I0802 18:29:36.078973   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:36.079426   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:36.079449   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:36.079383   46341 retry.go:31] will retry after 3.310675669s: waiting for machine to come up
	I0802 18:29:39.393846   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:39.394156   46274 main.go:141] libmachine: (test-preload-999194) DBG | unable to find current IP address of domain test-preload-999194 in network mk-test-preload-999194
	I0802 18:29:39.394211   46274 main.go:141] libmachine: (test-preload-999194) DBG | I0802 18:29:39.394118   46341 retry.go:31] will retry after 3.100126781s: waiting for machine to come up
	I0802 18:29:42.497797   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.498217   46274 main.go:141] libmachine: (test-preload-999194) Found IP for machine: 192.168.39.115
	I0802 18:29:42.498245   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has current primary IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.498270   46274 main.go:141] libmachine: (test-preload-999194) Reserving static IP address...
	I0802 18:29:42.498666   46274 main.go:141] libmachine: (test-preload-999194) Reserved static IP address: 192.168.39.115
	I0802 18:29:42.498685   46274 main.go:141] libmachine: (test-preload-999194) Waiting for SSH to be available...
	I0802 18:29:42.498703   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "test-preload-999194", mac: "52:54:00:c4:99:f0", ip: "192.168.39.115"} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.498729   46274 main.go:141] libmachine: (test-preload-999194) DBG | skip adding static IP to network mk-test-preload-999194 - found existing host DHCP lease matching {name: "test-preload-999194", mac: "52:54:00:c4:99:f0", ip: "192.168.39.115"}
	I0802 18:29:42.498747   46274 main.go:141] libmachine: (test-preload-999194) DBG | Getting to WaitForSSH function...
	I0802 18:29:42.500487   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.500779   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.500812   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.500878   46274 main.go:141] libmachine: (test-preload-999194) DBG | Using SSH client type: external
	I0802 18:29:42.500909   46274 main.go:141] libmachine: (test-preload-999194) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa (-rw-------)
	I0802 18:29:42.500942   46274 main.go:141] libmachine: (test-preload-999194) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:29:42.500965   46274 main.go:141] libmachine: (test-preload-999194) DBG | About to run SSH command:
	I0802 18:29:42.500977   46274 main.go:141] libmachine: (test-preload-999194) DBG | exit 0
	I0802 18:29:42.622838   46274 main.go:141] libmachine: (test-preload-999194) DBG | SSH cmd err, output: <nil>: 
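The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and retries a trivial "exit 0" until the guest answers. A rough sketch of that probe in Go, with a placeholder address and key path rather than minikube's real helper:

package main

import (
	"fmt"
	"os/exec"
)

// waitForSSH mirrors the "exit 0" probe in the log: it runs the system ssh
// client once and succeeds when the guest accepts the connection.
func waitForSSH(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	if err := waitForSSH("docker@192.168.39.115", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}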
	I0802 18:29:42.623199   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetConfigRaw
	I0802 18:29:42.623879   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetIP
	I0802 18:29:42.626428   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.626935   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.626965   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.627281   46274 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/config.json ...
	I0802 18:29:42.627512   46274 machine.go:94] provisionDockerMachine start ...
	I0802 18:29:42.627532   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:42.627765   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:42.630013   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.630327   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.630363   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.630468   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:42.630646   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.630812   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.631046   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:42.631239   46274 main.go:141] libmachine: Using SSH client type: native
	I0802 18:29:42.631423   46274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0802 18:29:42.631434   46274 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:29:42.735376   46274 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:29:42.735409   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetMachineName
	I0802 18:29:42.735776   46274 buildroot.go:166] provisioning hostname "test-preload-999194"
	I0802 18:29:42.735800   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetMachineName
	I0802 18:29:42.735980   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:42.738545   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.738996   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.739026   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.739148   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:42.739305   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.739479   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.739581   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:42.739744   46274 main.go:141] libmachine: Using SSH client type: native
	I0802 18:29:42.739947   46274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0802 18:29:42.739963   46274 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-999194 && echo "test-preload-999194" | sudo tee /etc/hostname
	I0802 18:29:42.856806   46274 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-999194
	
	I0802 18:29:42.856838   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:42.859433   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.859805   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.859830   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.860013   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:42.860183   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.860340   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:42.860449   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:42.860633   46274 main.go:141] libmachine: Using SSH client type: native
	I0802 18:29:42.860806   46274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0802 18:29:42.860822   46274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-999194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-999194/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-999194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:29:42.971260   46274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
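The hostname step above runs the shell snippet shown to guarantee that /etc/hosts maps 127.0.1.1 to the new hostname: it rewrites an existing 127.0.1.1 entry if one is present and appends a new line otherwise. The same logic as a small, self-contained Go function (illustrative only, operating on the file content as a string):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell snippet: if no line already ends with
// the hostname, either rewrite an existing 127.0.1.1 line or append one.
func ensureHostsEntry(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostsEntry(in, "test-preload-999194"))
}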
	I0802 18:29:42.971286   46274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:29:42.971308   46274 buildroot.go:174] setting up certificates
	I0802 18:29:42.971316   46274 provision.go:84] configureAuth start
	I0802 18:29:42.971325   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetMachineName
	I0802 18:29:42.971632   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetIP
	I0802 18:29:42.974121   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.974515   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.974550   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.974723   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:42.977439   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.977848   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:42.977877   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:42.977937   46274 provision.go:143] copyHostCerts
	I0802 18:29:42.978001   46274 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:29:42.978013   46274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:29:42.978093   46274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:29:42.978201   46274 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:29:42.978211   46274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:29:42.978251   46274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:29:42.978339   46274 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:29:42.978348   46274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:29:42.978386   46274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:29:42.978480   46274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.test-preload-999194 san=[127.0.0.1 192.168.39.115 localhost minikube test-preload-999194]
	I0802 18:29:43.040220   46274 provision.go:177] copyRemoteCerts
	I0802 18:29:43.040282   46274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:29:43.040312   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.042875   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.043223   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.043252   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.043507   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.043721   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.043912   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.044049   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:29:43.124814   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:29:43.146248   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 18:29:43.168704   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 18:29:43.189552   46274 provision.go:87] duration metric: took 218.221963ms to configureAuth
	I0802 18:29:43.189588   46274 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:29:43.189761   46274 config.go:182] Loaded profile config "test-preload-999194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0802 18:29:43.189825   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.192160   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.192475   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.192503   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.192636   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.192843   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.192973   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.193121   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.193304   46274 main.go:141] libmachine: Using SSH client type: native
	I0802 18:29:43.193457   46274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0802 18:29:43.193472   46274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:29:43.446613   46274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:29:43.446641   46274 machine.go:97] duration metric: took 819.114318ms to provisionDockerMachine
	I0802 18:29:43.446655   46274 start.go:293] postStartSetup for "test-preload-999194" (driver="kvm2")
	I0802 18:29:43.446668   46274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:29:43.446694   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:43.446985   46274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:29:43.447018   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.449433   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.449755   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.449790   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.449984   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.450129   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.450273   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.450401   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:29:43.533141   46274 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:29:43.536995   46274 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:29:43.537019   46274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:29:43.537080   46274 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:29:43.537156   46274 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:29:43.537244   46274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:29:43.545724   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:29:43.567354   46274 start.go:296] duration metric: took 120.685125ms for postStartSetup
	I0802 18:29:43.567394   46274 fix.go:56] duration metric: took 18.834977717s for fixHost
	I0802 18:29:43.567415   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.570026   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.570325   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.570356   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.570577   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.570766   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.570926   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.571021   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.571162   46274 main.go:141] libmachine: Using SSH client type: native
	I0802 18:29:43.571368   46274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.115 22 <nil> <nil>}
	I0802 18:29:43.571383   46274 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:29:43.671415   46274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623383.647410851
	
	I0802 18:29:43.671449   46274 fix.go:216] guest clock: 1722623383.647410851
	I0802 18:29:43.671463   46274 fix.go:229] Guest: 2024-08-02 18:29:43.647410851 +0000 UTC Remote: 2024-08-02 18:29:43.567398446 +0000 UTC m=+31.887405197 (delta=80.012405ms)
	I0802 18:29:43.671493   46274 fix.go:200] guest clock delta is within tolerance: 80.012405ms
	I0802 18:29:43.671500   46274 start.go:83] releasing machines lock for "test-preload-999194", held for 18.939096206s
	I0802 18:29:43.671524   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:43.671780   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetIP
	I0802 18:29:43.674504   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.674844   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.674873   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.675045   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:43.675595   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:43.675785   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:29:43.675873   46274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:29:43.675911   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.676015   46274 ssh_runner.go:195] Run: cat /version.json
	I0802 18:29:43.676039   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:29:43.678419   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.678653   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.678827   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.678852   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.678993   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.679008   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:43.679044   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:43.679151   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.679201   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:29:43.679324   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.679384   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:29:43.679464   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:29:43.679511   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:29:43.679652   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:29:43.790363   46274 ssh_runner.go:195] Run: systemctl --version
	I0802 18:29:43.796369   46274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:29:43.939383   46274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:29:43.944753   46274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:29:43.944827   46274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:29:43.959948   46274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
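The CNI step above disables any bridge or podman configs found directly under /etc/cni/net.d by renaming them with a .mk_disabled suffix, so the runtime ignores them in favor of the bridge CNI that minikube recommends for kvm2 plus crio. A hedged Go sketch of that rename pass, not the actual minikube code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames every bridge or podman CNI config directly
// under dir with a ".mk_disabled" suffix and reports what it touched.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", disabled)
}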
	I0802 18:29:43.959970   46274 start.go:495] detecting cgroup driver to use...
	I0802 18:29:43.960040   46274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:29:43.974836   46274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:29:43.987886   46274 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:29:43.987962   46274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:29:44.000227   46274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:29:44.012909   46274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:29:44.120625   46274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:29:44.253708   46274 docker.go:233] disabling docker service ...
	I0802 18:29:44.253785   46274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:29:44.267055   46274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:29:44.279142   46274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:29:44.416346   46274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:29:44.543642   46274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:29:44.556613   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:29:44.573626   46274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0802 18:29:44.573682   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.583485   46274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:29:44.583556   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.593123   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.602732   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.612123   46274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:29:44.622268   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.632347   46274 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.648327   46274 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:29:44.658216   46274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:29:44.667028   46274 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:29:44.667076   46274 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:29:44.679629   46274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:29:44.688270   46274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:29:44.814043   46274 ssh_runner.go:195] Run: sudo systemctl restart crio
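The sequence of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, and injects a default_sysctls block before restarting CRI-O. A small Go sketch of the underlying set-or-append pattern those sed calls rely on (illustrative, string-based, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// setConfValue replaces any existing `key = ...` line in the drop-in config
// with the quoted value, or appends one if the key is absent.
func setConfValue(conf, key, value string) string {
	line := fmt.Sprintf("%s = %q", key, value)
	re := regexp.MustCompile(`(?m)^\s*#?\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.7")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}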
	I0802 18:29:44.940925   46274 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:29:44.940997   46274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:29:44.945218   46274 start.go:563] Will wait 60s for crictl version
	I0802 18:29:44.945273   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:44.948657   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:29:44.984931   46274 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:29:44.985020   46274 ssh_runner.go:195] Run: crio --version
	I0802 18:29:45.010745   46274 ssh_runner.go:195] Run: crio --version
	I0802 18:29:45.041147   46274 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0802 18:29:45.042358   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetIP
	I0802 18:29:45.045004   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:45.045355   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:29:45.045386   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:29:45.045621   46274 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:29:45.049460   46274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:29:45.061318   46274 kubeadm.go:883] updating cluster {Name:test-preload-999194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-999194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:29:45.061419   46274 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0802 18:29:45.061466   46274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:29:45.093932   46274 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0802 18:29:45.093986   46274 ssh_runner.go:195] Run: which lz4
	I0802 18:29:45.097683   46274 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:29:45.101647   46274 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:29:45.101675   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0802 18:29:46.468471   46274 crio.go:462] duration metric: took 1.370823628s to copy over tarball
	I0802 18:29:46.468536   46274 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:29:48.763708   46274 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.295141148s)
	I0802 18:29:48.763743   46274 crio.go:469] duration metric: took 2.29524133s to extract the tarball
	I0802 18:29:48.763751   46274 ssh_runner.go:146] rm: /preloaded.tar.lz4
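The sequence above is the preload fast path: stat shows /preloaded.tar.lz4 is absent, the cached tarball is scp'd over, extracted into /var with lz4 while preserving security xattrs, and then removed. A hedged Go sketch of the extract-and-clean-up step, run locally with os/exec as a stand-in for the remote ssh_runner calls (function name and local execution are assumptions):

    // extractPreload unpacks the preloaded image tarball into dest with lz4
    // and extended attributes preserved, then removes the tarball, mirroring
    // the two log steps above. Hypothetical local sketch.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"time"
    )

    func extractPreload(tarball, dest string) error {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", dest, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return err
    	}
    	log.Printf("extracted %s in %s", tarball, time.Since(start))
    	return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		log.Fatal(err)
    	}
    }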
	I0802 18:29:48.803302   46274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:29:48.841829   46274 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0802 18:29:48.841854   46274 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 18:29:48.841919   46274 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:29:48.841949   46274 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0802 18:29:48.841977   46274 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 18:29:48.842027   46274 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0802 18:29:48.842054   46274 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0802 18:29:48.842116   46274 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0802 18:29:48.842135   46274 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0802 18:29:48.842144   46274 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 18:29:48.843498   46274 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 18:29:48.843517   46274 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0802 18:29:48.843515   46274 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0802 18:29:48.843522   46274 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:29:48.843500   46274 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 18:29:48.843496   46274 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0802 18:29:48.843501   46274 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0802 18:29:48.843557   46274 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0802 18:29:49.033626   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0802 18:29:49.069113   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0802 18:29:49.075498   46274 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0802 18:29:49.075534   46274 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0802 18:29:49.075570   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.076156   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0802 18:29:49.091289   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 18:29:49.091295   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0802 18:29:49.096107   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0802 18:29:49.124027   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0802 18:29:49.124035   46274 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0802 18:29:49.124074   46274 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0802 18:29:49.124114   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.139650   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0802 18:29:49.158396   46274 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0802 18:29:49.158442   46274 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0802 18:29:49.158500   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.206525   46274 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0802 18:29:49.206572   46274 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 18:29:49.206642   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.239427   46274 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0802 18:29:49.239463   46274 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0802 18:29:49.239503   46274 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0802 18:29:49.239539   46274 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0802 18:29:49.239588   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.239507   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.242214   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0802 18:29:49.242319   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0802 18:29:49.242395   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0802 18:29:49.257674   46274 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0802 18:29:49.257720   46274 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0802 18:29:49.257765   46274 ssh_runner.go:195] Run: which crictl
	I0802 18:29:49.257767   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0802 18:29:49.257802   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0802 18:29:49.257842   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0802 18:29:49.257890   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0802 18:29:49.311293   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0802 18:29:49.311395   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0802 18:29:49.311417   46274 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0802 18:29:49.311432   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0802 18:29:49.311467   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0802 18:29:49.357319   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0802 18:29:49.357426   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0802 18:29:49.367940   46274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0802 18:29:49.368056   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0802 18:29:49.368107   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0802 18:29:49.368160   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0802 18:29:49.368188   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0802 18:29:49.368188   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0802 18:29:49.368267   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0802 18:29:49.750977   46274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:29:52.028518   46274 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.717025551s)
	I0802 18:29:52.028570   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0802 18:29:52.028619   46274 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.717167284s)
	I0802 18:29:52.028648   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0802 18:29:52.028656   46274 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0802 18:29:52.028680   46274 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.671236967s)
	I0802 18:29:52.028701   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0802 18:29:52.028709   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0802 18:29:52.028743   46274 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.660783585s)
	I0802 18:29:52.028780   46274 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0802 18:29:52.028797   46274 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.660515187s)
	I0802 18:29:52.028812   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0802 18:29:52.028841   46274 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.660628739s)
	I0802 18:29:52.028870   46274 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.660687778s)
	I0802 18:29:52.028875   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0802 18:29:52.028846   46274 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0802 18:29:52.028881   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0802 18:29:52.028917   46274 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.277912007s)
	I0802 18:29:52.378457   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0802 18:29:52.378510   46274 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0802 18:29:52.378531   46274 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0802 18:29:52.378576   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0802 18:29:52.824492   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0802 18:29:52.824550   46274 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0802 18:29:52.824628   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0802 18:29:53.569206   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0802 18:29:53.569263   46274 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0802 18:29:53.569332   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0802 18:29:54.413192   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0802 18:29:54.413243   46274 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0802 18:29:54.413288   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0802 18:29:56.561092   46274 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.147781901s)
	I0802 18:29:56.561127   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0802 18:29:56.561155   46274 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0802 18:29:56.561216   46274 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0802 18:29:57.309841   46274 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0802 18:29:57.309887   46274 cache_images.go:123] Successfully loaded all cached images
	I0802 18:29:57.309894   46274 cache_images.go:92] duration metric: took 8.468025111s to LoadCachedImages
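LoadCachedImages, as traced above, works per image: podman image inspect decides whether the runtime already has the image at the expected hash, a stale tag is removed with crictl rmi, the cached archive under /var/lib/minikube/images is transferred (or skipped when it already exists), and podman load imports it. A simplified Go sketch of that per-image flow (it only tests presence rather than comparing image IDs against expected digests, and it runs the commands locally; both are assumptions for illustration):

    // loadCachedImage: if the runtime does not already report the image,
    // drop any stale tag and "podman load" the cached archive, mirroring
    // the commands in the log above. Hypothetical local sketch.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func loadCachedImage(image, archive string) error {
    	// "podman image inspect" exits non-zero when the image is absent.
    	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
    		return nil // already present in the container runtime
    	}
    	// Remove a stale tag, ignoring "not found" errors.
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
    	// Import the archive transferred from the host cache.
    	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
    		log.Fatal(err)
    	}
    }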
	I0802 18:29:57.309908   46274 kubeadm.go:934] updating node { 192.168.39.115 8443 v1.24.4 crio true true} ...
	I0802 18:29:57.310025   46274 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-999194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-999194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:29:57.310112   46274 ssh_runner.go:195] Run: crio config
	I0802 18:29:57.355618   46274 cni.go:84] Creating CNI manager for ""
	I0802 18:29:57.355645   46274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:29:57.355662   46274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:29:57.355687   46274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.115 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-999194 NodeName:test-preload-999194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:29:57.355873   46274 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-999194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:29:57.355955   46274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0802 18:29:57.365757   46274 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:29:57.365822   46274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:29:57.374676   46274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0802 18:29:57.389641   46274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:29:57.404441   46274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0802 18:29:57.419580   46274 ssh_runner.go:195] Run: grep 192.168.39.115	control-plane.minikube.internal$ /etc/hosts
	I0802 18:29:57.423077   46274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:29:57.434063   46274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:29:57.551634   46274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:29:57.568609   46274 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194 for IP: 192.168.39.115
	I0802 18:29:57.568640   46274 certs.go:194] generating shared ca certs ...
	I0802 18:29:57.568660   46274 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:29:57.568846   46274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:29:57.568901   46274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:29:57.568915   46274 certs.go:256] generating profile certs ...
	I0802 18:29:57.569023   46274 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.key
	I0802 18:29:57.569104   46274 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/apiserver.key.9d80d783
	I0802 18:29:57.569183   46274 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/proxy-client.key
	I0802 18:29:57.569340   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:29:57.569375   46274 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:29:57.569385   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:29:57.569418   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:29:57.569475   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:29:57.569510   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:29:57.569578   46274 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:29:57.570223   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:29:57.593791   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:29:57.618317   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:29:57.649402   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:29:57.682191   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0802 18:29:57.711064   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 18:29:57.742397   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:29:57.779012   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 18:29:57.800920   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:29:57.821952   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:29:57.842945   46274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:29:57.864481   46274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:29:57.879767   46274 ssh_runner.go:195] Run: openssl version
	I0802 18:29:57.884954   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:29:57.894844   46274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:29:57.898788   46274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:29:57.898835   46274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:29:57.904103   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:29:57.914074   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:29:57.923917   46274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:29:57.927938   46274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:29:57.927998   46274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:29:57.933120   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:29:57.942871   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:29:57.952738   46274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:29:57.956649   46274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:29:57.956697   46274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:29:57.961718   46274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:29:57.972145   46274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:29:57.976146   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:29:57.981493   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:29:57.986612   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:29:57.991828   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:29:57.997112   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:29:58.002441   46274 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
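Each openssl x509 -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; a failing check would force regeneration before the cluster restart. The same test can be expressed in Go with crypto/x509 instead of shelling out to openssl (a sketch; the helper name and reading only the first PEM block are assumptions):

    // certValidFor reports whether the certificate at path is still valid
    // for at least d, i.e. the Go equivalent of "openssl x509 -checkend".
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certValidFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }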
	I0802 18:29:58.007662   46274 kubeadm.go:392] StartCluster: {Name:test-preload-999194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-999194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:29:58.007738   46274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:29:58.007778   46274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:29:58.043052   46274 cri.go:89] found id: ""
	I0802 18:29:58.043142   46274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:29:58.052661   46274 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 18:29:58.052677   46274 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 18:29:58.052718   46274 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 18:29:58.061721   46274 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:29:58.062155   46274 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-999194" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:29:58.062276   46274 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-5397/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-999194" cluster setting kubeconfig missing "test-preload-999194" context setting]
	I0802 18:29:58.062560   46274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:29:58.063150   46274 kapi.go:59] client config for test-preload-999194: &rest.Config{Host:"https://192.168.39.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 18:29:58.063730   46274 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 18:29:58.072701   46274 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.115
	I0802 18:29:58.072730   46274 kubeadm.go:1160] stopping kube-system containers ...
	I0802 18:29:58.072742   46274 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 18:29:58.072787   46274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:29:58.105517   46274 cri.go:89] found id: ""
	I0802 18:29:58.105600   46274 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 18:29:58.122460   46274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:29:58.131647   46274 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:29:58.131671   46274 kubeadm.go:157] found existing configuration files:
	
	I0802 18:29:58.131720   46274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:29:58.140046   46274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:29:58.140104   46274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:29:58.148583   46274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:29:58.156698   46274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:29:58.156749   46274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:29:58.165065   46274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:29:58.173318   46274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:29:58.173376   46274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:29:58.182535   46274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:29:58.190729   46274 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:29:58.190784   46274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:29:58.199323   46274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:29:58.208143   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:29:58.300727   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:29:59.058285   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:29:59.314744   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:29:59.370482   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
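The restart path then replays a fixed sequence of kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, exactly as the five commands above show. A compact Go sketch of that choreography (it omits the sudo env PATH=... wrapper used above and runs kubeadm from $PATH; both simplifications are assumptions):

    // Replay the kubeadm "init phase" sequence from the log against the
    // generated config file. Hypothetical local sketch.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("kubeadm %v: %v", p, err)
    		}
    	}
    }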
	I0802 18:29:59.466143   46274 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:29:59.466227   46274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:29:59.966330   46274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:30:00.467056   46274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:30:00.484205   46274 api_server.go:72] duration metric: took 1.018059836s to wait for apiserver process to appear ...
	I0802 18:30:00.484235   46274 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:30:00.484258   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:00.484657   46274 api_server.go:269] stopped: https://192.168.39.115:8443/healthz: Get "https://192.168.39.115:8443/healthz": dial tcp 192.168.39.115:8443: connect: connection refused
	I0802 18:30:00.985318   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:04.060429   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 18:30:04.060470   46274 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 18:30:04.060484   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:04.108003   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 18:30:04.108026   46274 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 18:30:04.484507   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:04.491384   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 18:30:04.491421   46274 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 18:30:04.984506   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:04.990352   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 18:30:04.990378   46274 api_server.go:103] status: https://192.168.39.115:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 18:30:05.485016   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:05.490057   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0802 18:30:05.496594   46274 api_server.go:141] control plane version: v1.24.4
	I0802 18:30:05.496616   46274 api_server.go:131] duration metric: took 5.012371081s to wait for apiserver health ...
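The health wait above polls https://<node-ip>:8443/healthz roughly every half second, treating connection refused, 403 (RBAC not yet bootstrapped) and 500 (post-start hooks still failing) as "not ready" until a plain 200 "ok" arrives. A hedged Go sketch of that loop (TLS verification is skipped here for brevity; the real client authenticates with the cluster CA and client certificates):

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the timeout expires, echoing the retry pattern in the log.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen above
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.115:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }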
	I0802 18:30:05.496625   46274 cni.go:84] Creating CNI manager for ""
	I0802 18:30:05.496631   46274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:30:05.498506   46274 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:30:05.499703   46274 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:30:05.510656   46274 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 18:30:05.528054   46274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:30:05.536609   46274 system_pods.go:59] 8 kube-system pods found
	I0802 18:30:05.536634   46274 system_pods.go:61] "coredns-6d4b75cb6d-8qx2q" [cb7c722d-254f-4d29-acbe-2222dd2c5dfa] Running
	I0802 18:30:05.536639   46274 system_pods.go:61] "coredns-6d4b75cb6d-tjqpt" [f0b3dd02-2d58-42a7-8e5d-83154809d967] Running
	I0802 18:30:05.536650   46274 system_pods.go:61] "etcd-test-preload-999194" [4fb8f694-e293-47f6-af96-252dfc1536bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 18:30:05.536654   46274 system_pods.go:61] "kube-apiserver-test-preload-999194" [4a24fa6c-569f-487e-8300-3d9adedda01a] Running
	I0802 18:30:05.536663   46274 system_pods.go:61] "kube-controller-manager-test-preload-999194" [a86b5737-c5ed-4383-bdc6-414c9f516cd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 18:30:05.536672   46274 system_pods.go:61] "kube-proxy-fsnhj" [89c5222b-08e4-465d-9644-b207b5f25bd9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 18:30:05.536676   46274 system_pods.go:61] "kube-scheduler-test-preload-999194" [92aee063-45d4-4a3e-8ac7-5dffff8b81ef] Running
	I0802 18:30:05.536679   46274 system_pods.go:61] "storage-provisioner" [fbd41024-0758-4dcb-b42c-b1afe6ac9dc3] Running
	I0802 18:30:05.536684   46274 system_pods.go:74] duration metric: took 8.614535ms to wait for pod list to return data ...
	I0802 18:30:05.536691   46274 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:30:05.541096   46274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:30:05.541124   46274 node_conditions.go:123] node cpu capacity is 2
	I0802 18:30:05.541134   46274 node_conditions.go:105] duration metric: took 4.438823ms to run NodePressure ...
	I0802 18:30:05.541149   46274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:30:05.781786   46274 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0802 18:30:05.786092   46274 kubeadm.go:739] kubelet initialised
	I0802 18:30:05.786113   46274 kubeadm.go:740] duration metric: took 4.300511ms waiting for restarted kubelet to initialise ...
	I0802 18:30:05.786120   46274 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 18:30:05.796240   46274 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8qx2q" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:05.805223   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "coredns-6d4b75cb6d-8qx2q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.805249   46274 pod_ready.go:81] duration metric: took 8.982191ms for pod "coredns-6d4b75cb6d-8qx2q" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:05.805258   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "coredns-6d4b75cb6d-8qx2q" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.805265   46274 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:05.814941   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.814963   46274 pod_ready.go:81] duration metric: took 9.686987ms for pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:05.814971   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.814976   46274 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:05.821068   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "etcd-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.821088   46274 pod_ready.go:81] duration metric: took 6.103934ms for pod "etcd-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:05.821096   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "etcd-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.821101   46274 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:05.931187   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "kube-apiserver-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.931210   46274 pod_ready.go:81] duration metric: took 110.101263ms for pod "kube-apiserver-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:05.931220   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "kube-apiserver-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:05.931226   46274 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:06.331274   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:06.331321   46274 pod_ready.go:81] duration metric: took 400.086547ms for pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:06.331330   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:06.331336   46274 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fsnhj" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:06.731754   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "kube-proxy-fsnhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:06.731784   46274 pod_ready.go:81] duration metric: took 400.438752ms for pod "kube-proxy-fsnhj" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:06.731795   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "kube-proxy-fsnhj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:06.731809   46274 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:07.132375   46274 pod_ready.go:97] node "test-preload-999194" hosting pod "kube-scheduler-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:07.132400   46274 pod_ready.go:81] duration metric: took 400.583599ms for pod "kube-scheduler-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	E0802 18:30:07.132409   46274 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-999194" hosting pod "kube-scheduler-test-preload-999194" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:07.132416   46274 pod_ready.go:38] duration metric: took 1.346287515s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
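
[Editor's note, not part of the captured log] The pod_ready.go entries above poll each system-critical pod in kube-system until its Ready condition is True, with a per-pod timeout ("waiting up to 4m0s"). The following is a minimal client-go sketch of that pattern, assuming the kubeconfig path shown later in this log; the pod names and helper are illustrative only, not minikube's own implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path taken from the settings.go line below; adjust as needed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-5397/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll each pod until Ready or give up after 4 minutes, mirroring the
    	// "waiting up to 4m0s" entries above. Pod names are examples from this log.
    	for _, name := range []string{"etcd-test-preload-999194", "kube-apiserver-test-preload-999194"} {
    		err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    			func(ctx context.Context) (bool, error) {
    				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    				if err != nil {
    					return false, nil // keep polling through transient errors
    				}
    				return podReady(pod), nil
    			})
    		fmt.Printf("pod %q ready: %v\n", name, err == nil)
    	}
    }

Note that the log above skips pods while the node itself is not Ready; the sketch omits that node-status check for brevity.
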
	I0802 18:30:07.132431   46274 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:30:07.144387   46274 ops.go:34] apiserver oom_adj: -16
	I0802 18:30:07.144406   46274 kubeadm.go:597] duration metric: took 9.091723416s to restartPrimaryControlPlane
	I0802 18:30:07.144414   46274 kubeadm.go:394] duration metric: took 9.13675659s to StartCluster
	I0802 18:30:07.144430   46274 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:30:07.144493   46274 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:30:07.145126   46274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:30:07.145345   46274 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.115 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:30:07.145412   46274 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 18:30:07.145481   46274 addons.go:69] Setting storage-provisioner=true in profile "test-preload-999194"
	I0802 18:30:07.145493   46274 addons.go:69] Setting default-storageclass=true in profile "test-preload-999194"
	I0802 18:30:07.145513   46274 addons.go:234] Setting addon storage-provisioner=true in "test-preload-999194"
	W0802 18:30:07.145523   46274 addons.go:243] addon storage-provisioner should already be in state true
	I0802 18:30:07.145524   46274 config.go:182] Loaded profile config "test-preload-999194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0802 18:30:07.145527   46274 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-999194"
	I0802 18:30:07.145558   46274 host.go:66] Checking if "test-preload-999194" exists ...
	I0802 18:30:07.145847   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:30:07.145851   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:30:07.145891   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:30:07.145918   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:30:07.146927   46274 out.go:177] * Verifying Kubernetes components...
	I0802 18:30:07.148321   46274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:30:07.161169   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0802 18:30:07.161235   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44883
	I0802 18:30:07.161590   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:30:07.161633   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:30:07.162088   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:30:07.162108   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:30:07.162195   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:30:07.162218   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:30:07.162477   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:30:07.162514   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:30:07.162674   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetState
	I0802 18:30:07.163114   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:30:07.163157   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:30:07.165164   46274 kapi.go:59] client config for test-preload-999194: &rest.Config{Host:"https://192.168.39.115:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 18:30:07.165484   46274 addons.go:234] Setting addon default-storageclass=true in "test-preload-999194"
	W0802 18:30:07.165501   46274 addons.go:243] addon default-storageclass should already be in state true
	I0802 18:30:07.165526   46274 host.go:66] Checking if "test-preload-999194" exists ...
	I0802 18:30:07.165873   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:30:07.165913   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:30:07.180443   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39257
	I0802 18:30:07.181004   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:30:07.181548   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:30:07.181567   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:30:07.181840   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I0802 18:30:07.181917   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:30:07.182198   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:30:07.182532   46274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:30:07.182597   46274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:30:07.182731   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:30:07.182757   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:30:07.183132   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:30:07.183360   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetState
	I0802 18:30:07.185150   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:30:07.187268   46274 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:30:07.188610   46274 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:30:07.188629   46274 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 18:30:07.188655   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:30:07.191913   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:30:07.192336   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:30:07.192375   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:30:07.192652   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:30:07.192884   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:30:07.193038   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:30:07.193235   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:30:07.199472   46274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I0802 18:30:07.199967   46274 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:30:07.200553   46274 main.go:141] libmachine: Using API Version  1
	I0802 18:30:07.200575   46274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:30:07.200940   46274 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:30:07.201166   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetState
	I0802 18:30:07.202863   46274 main.go:141] libmachine: (test-preload-999194) Calling .DriverName
	I0802 18:30:07.203188   46274 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 18:30:07.203205   46274 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 18:30:07.203225   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHHostname
	I0802 18:30:07.206568   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:30:07.206981   46274 main.go:141] libmachine: (test-preload-999194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:99:f0", ip: ""} in network mk-test-preload-999194: {Iface:virbr1 ExpiryTime:2024-08-02 19:29:35 +0000 UTC Type:0 Mac:52:54:00:c4:99:f0 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:test-preload-999194 Clientid:01:52:54:00:c4:99:f0}
	I0802 18:30:07.207002   46274 main.go:141] libmachine: (test-preload-999194) DBG | domain test-preload-999194 has defined IP address 192.168.39.115 and MAC address 52:54:00:c4:99:f0 in network mk-test-preload-999194
	I0802 18:30:07.207214   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHPort
	I0802 18:30:07.207449   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHKeyPath
	I0802 18:30:07.207623   46274 main.go:141] libmachine: (test-preload-999194) Calling .GetSSHUsername
	I0802 18:30:07.207802   46274 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa Username:docker}
	I0802 18:30:07.312439   46274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:30:07.329351   46274 node_ready.go:35] waiting up to 6m0s for node "test-preload-999194" to be "Ready" ...
	I0802 18:30:07.400469   46274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:30:07.401925   46274 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 18:30:08.391999   46274 main.go:141] libmachine: Making call to close driver server
	I0802 18:30:08.392023   46274 main.go:141] libmachine: (test-preload-999194) Calling .Close
	I0802 18:30:08.392089   46274 main.go:141] libmachine: Making call to close driver server
	I0802 18:30:08.392110   46274 main.go:141] libmachine: (test-preload-999194) Calling .Close
	I0802 18:30:08.392339   46274 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:30:08.392389   46274 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:30:08.392403   46274 main.go:141] libmachine: Making call to close driver server
	I0802 18:30:08.392412   46274 main.go:141] libmachine: (test-preload-999194) Calling .Close
	I0802 18:30:08.392511   46274 main.go:141] libmachine: (test-preload-999194) DBG | Closing plugin on server side
	I0802 18:30:08.392526   46274 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:30:08.392552   46274 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:30:08.392565   46274 main.go:141] libmachine: Making call to close driver server
	I0802 18:30:08.392582   46274 main.go:141] libmachine: (test-preload-999194) Calling .Close
	I0802 18:30:08.392615   46274 main.go:141] libmachine: (test-preload-999194) DBG | Closing plugin on server side
	I0802 18:30:08.392652   46274 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:30:08.392662   46274 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:30:08.392963   46274 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:30:08.392977   46274 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:30:08.399752   46274 main.go:141] libmachine: Making call to close driver server
	I0802 18:30:08.399767   46274 main.go:141] libmachine: (test-preload-999194) Calling .Close
	I0802 18:30:08.400002   46274 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:30:08.400019   46274 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:30:08.401909   46274 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 18:30:08.403213   46274 addons.go:510] duration metric: took 1.257813791s for enable addons: enabled=[storage-provisioner default-storageclass]
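
[Editor's note, not part of the captured log] The addon enablement above copies the manifests to the node and applies them with the node-local kubectl over SSH. A rough sketch of the equivalent commands, using the host, key path, and manifest paths that appear in this log; the use of the system ssh client via os/exec is an assumption, not how minikube's ssh_runner works internally.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	}
    	for _, m := range manifests {
    		// Remote command string matches the ssh_runner entries logged above.
    		remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    			"/var/lib/minikube/binaries/v1.24.4/kubectl apply -f " + m
    		cmd := exec.Command("ssh",
    			"-i", "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/test-preload-999194/id_rsa",
    			"-p", "22",
    			"docker@192.168.39.115",
    			remote)
    		out, err := cmd.CombinedOutput()
    		fmt.Printf("%s: err=%v\n%s", m, err, out)
    	}
    }
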
	I0802 18:30:09.334286   46274 node_ready.go:53] node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:11.833613   46274 node_ready.go:53] node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:13.833645   46274 node_ready.go:53] node "test-preload-999194" has status "Ready":"False"
	I0802 18:30:14.333103   46274 node_ready.go:49] node "test-preload-999194" has status "Ready":"True"
	I0802 18:30:14.333125   46274 node_ready.go:38] duration metric: took 7.003739062s for node "test-preload-999194" to be "Ready" ...
	I0802 18:30:14.333134   46274 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 18:30:14.337873   46274 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:14.342855   46274 pod_ready.go:92] pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:14.342873   46274 pod_ready.go:81] duration metric: took 4.976342ms for pod "coredns-6d4b75cb6d-tjqpt" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:14.342881   46274 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:14.347647   46274 pod_ready.go:92] pod "etcd-test-preload-999194" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:14.347664   46274 pod_ready.go:81] duration metric: took 4.77829ms for pod "etcd-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:14.347672   46274 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:16.353198   46274 pod_ready.go:102] pod "kube-apiserver-test-preload-999194" in "kube-system" namespace has status "Ready":"False"
	I0802 18:30:18.353854   46274 pod_ready.go:102] pod "kube-apiserver-test-preload-999194" in "kube-system" namespace has status "Ready":"False"
	I0802 18:30:20.354056   46274 pod_ready.go:92] pod "kube-apiserver-test-preload-999194" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:20.354081   46274 pod_ready.go:81] duration metric: took 6.006403013s for pod "kube-apiserver-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.354091   46274 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.358906   46274 pod_ready.go:92] pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:20.358927   46274 pod_ready.go:81] duration metric: took 4.830219ms for pod "kube-controller-manager-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.358938   46274 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fsnhj" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.363077   46274 pod_ready.go:92] pod "kube-proxy-fsnhj" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:20.363115   46274 pod_ready.go:81] duration metric: took 4.155601ms for pod "kube-proxy-fsnhj" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.363129   46274 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.367486   46274 pod_ready.go:92] pod "kube-scheduler-test-preload-999194" in "kube-system" namespace has status "Ready":"True"
	I0802 18:30:20.367508   46274 pod_ready.go:81] duration metric: took 4.370972ms for pod "kube-scheduler-test-preload-999194" in "kube-system" namespace to be "Ready" ...
	I0802 18:30:20.367520   46274 pod_ready.go:38] duration metric: took 6.034376192s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 18:30:20.367534   46274 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:30:20.367589   46274 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:30:20.382477   46274 api_server.go:72] duration metric: took 13.237096995s to wait for apiserver process to appear ...
	I0802 18:30:20.382500   46274 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:30:20.382515   46274 api_server.go:253] Checking apiserver healthz at https://192.168.39.115:8443/healthz ...
	I0802 18:30:20.387750   46274 api_server.go:279] https://192.168.39.115:8443/healthz returned 200:
	ok
	I0802 18:30:20.388580   46274 api_server.go:141] control plane version: v1.24.4
	I0802 18:30:20.388603   46274 api_server.go:131] duration metric: took 6.096083ms to wait for apiserver health ...
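
[Editor's note, not part of the captured log] The healthz probe above hits https://192.168.39.115:8443/healthz with the client certificate, key, and CA from the rest.Config dump earlier in this log and expects a 200 "ok" body. A self-contained sketch of such a probe, offered as an illustration of the check rather than minikube's actual code path:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	// Certificate paths taken from the kapi.go client config logged above.
    	cert, err := tls.LoadX509KeyPair(
    		"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.crt",
    		"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/test-preload-999194/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.39.115:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok", as in the log
    }
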
	I0802 18:30:20.388613   46274 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:30:20.393587   46274 system_pods.go:59] 7 kube-system pods found
	I0802 18:30:20.393609   46274 system_pods.go:61] "coredns-6d4b75cb6d-tjqpt" [f0b3dd02-2d58-42a7-8e5d-83154809d967] Running
	I0802 18:30:20.393613   46274 system_pods.go:61] "etcd-test-preload-999194" [4fb8f694-e293-47f6-af96-252dfc1536bc] Running
	I0802 18:30:20.393617   46274 system_pods.go:61] "kube-apiserver-test-preload-999194" [4a24fa6c-569f-487e-8300-3d9adedda01a] Running
	I0802 18:30:20.393621   46274 system_pods.go:61] "kube-controller-manager-test-preload-999194" [a86b5737-c5ed-4383-bdc6-414c9f516cd7] Running
	I0802 18:30:20.393624   46274 system_pods.go:61] "kube-proxy-fsnhj" [89c5222b-08e4-465d-9644-b207b5f25bd9] Running
	I0802 18:30:20.393627   46274 system_pods.go:61] "kube-scheduler-test-preload-999194" [92aee063-45d4-4a3e-8ac7-5dffff8b81ef] Running
	I0802 18:30:20.393630   46274 system_pods.go:61] "storage-provisioner" [fbd41024-0758-4dcb-b42c-b1afe6ac9dc3] Running
	I0802 18:30:20.393636   46274 system_pods.go:74] duration metric: took 5.017185ms to wait for pod list to return data ...
	I0802 18:30:20.393659   46274 default_sa.go:34] waiting for default service account to be created ...
	I0802 18:30:20.395720   46274 default_sa.go:45] found service account: "default"
	I0802 18:30:20.395741   46274 default_sa.go:55] duration metric: took 2.071375ms for default service account to be created ...
	I0802 18:30:20.395749   46274 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 18:30:20.554505   46274 system_pods.go:86] 7 kube-system pods found
	I0802 18:30:20.554533   46274 system_pods.go:89] "coredns-6d4b75cb6d-tjqpt" [f0b3dd02-2d58-42a7-8e5d-83154809d967] Running
	I0802 18:30:20.554538   46274 system_pods.go:89] "etcd-test-preload-999194" [4fb8f694-e293-47f6-af96-252dfc1536bc] Running
	I0802 18:30:20.554542   46274 system_pods.go:89] "kube-apiserver-test-preload-999194" [4a24fa6c-569f-487e-8300-3d9adedda01a] Running
	I0802 18:30:20.554547   46274 system_pods.go:89] "kube-controller-manager-test-preload-999194" [a86b5737-c5ed-4383-bdc6-414c9f516cd7] Running
	I0802 18:30:20.554551   46274 system_pods.go:89] "kube-proxy-fsnhj" [89c5222b-08e4-465d-9644-b207b5f25bd9] Running
	I0802 18:30:20.554554   46274 system_pods.go:89] "kube-scheduler-test-preload-999194" [92aee063-45d4-4a3e-8ac7-5dffff8b81ef] Running
	I0802 18:30:20.554558   46274 system_pods.go:89] "storage-provisioner" [fbd41024-0758-4dcb-b42c-b1afe6ac9dc3] Running
	I0802 18:30:20.554563   46274 system_pods.go:126] duration metric: took 158.808916ms to wait for k8s-apps to be running ...
	I0802 18:30:20.554572   46274 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 18:30:20.554624   46274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:30:20.568368   46274 system_svc.go:56] duration metric: took 13.78654ms WaitForService to wait for kubelet
	I0802 18:30:20.568412   46274 kubeadm.go:582] duration metric: took 13.423034194s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:30:20.568450   46274 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:30:20.752063   46274 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:30:20.752093   46274 node_conditions.go:123] node cpu capacity is 2
	I0802 18:30:20.752106   46274 node_conditions.go:105] duration metric: took 183.647953ms to run NodePressure ...
	I0802 18:30:20.752121   46274 start.go:241] waiting for startup goroutines ...
	I0802 18:30:20.752130   46274 start.go:246] waiting for cluster config update ...
	I0802 18:30:20.752139   46274 start.go:255] writing updated cluster config ...
	I0802 18:30:20.752380   46274 ssh_runner.go:195] Run: rm -f paused
	I0802 18:30:20.797317   46274 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0802 18:30:20.799281   46274 out.go:177] 
	W0802 18:30:20.800478   46274 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0802 18:30:20.801596   46274 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0802 18:30:20.802662   46274 out.go:177] * Done! kubectl is now configured to use "test-preload-999194" cluster and "default" namespace by default
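
[Editor's note, not part of the captured log] The warning above comes from comparing the kubectl client version (1.30.3) with the cluster version (1.24.4): a minor-version skew of 6, well outside kubectl's supported +/-1 skew. A tiny sketch of that arithmetic, assumed rather than taken from minikube's source:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) int {
    	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    	m, _ := strconv.Atoi(parts[1])
    	return m
    }

    func main() {
    	kubectl, cluster := "1.30.3", "1.24.4" // versions reported in the log above
    	skew := minor(kubectl) - minor(cluster)
    	if skew < 0 {
    		skew = -skew
    	}
    	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    	// kubectl officially supports only one minor version of skew against the
    	// control plane, hence the warning and the suggestion to use 'minikube kubectl'.
    }
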
	
	
	==> CRI-O <==
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.645248604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623421645224321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8987fb6e-2cf3-407c-97b0-7a69da87ea64 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.645786611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb343724-c91c-45ee-b91e-e10fe55fbd6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.645870812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb343724-c91c-45ee-b91e-e10fe55fbd6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.646056691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:baf80821b87eff46b02ad03480470855090d9dc627243fd60ad8cbb1979ede3f,PodSandboxId:080e6363fdbe18f380c39309a0233af626c9d1e2f2ab7172ed5bb9eb2d1dd9c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722623412844940942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tjqpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b3dd02-2d58-42a7-8e5d-83154809d967,},Annotations:map[string]string{io.kubernetes.container.hash: ace77049,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2624c79de55e18d9f75c5815cb6459e5de8ebe6eb7e686b50fd04b7ee6debad0,PodSandboxId:0fe1d397547bcf9d3f57aaf4b7f3b7e571322955ad2cf4bd67b5b3bdf5e460b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722623405869950077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnhj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 89c5222b-08e4-465d-9644-b207b5f25bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9f44e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9b8859436f511c882695be45eff295bc7738b573491d167858731d194ef3f6,PodSandboxId:0be7165e810688f5a3d6f285875d9d43a194b6c037ca5300a06909f80959c589,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722623405806056972,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
d41024-0758-4dcb-b42c-b1afe6ac9dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1970e45,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d97f592f8d7a3541f099e8bc3b33acaa1f482592625e475a39f077adf177bbe,PodSandboxId:d142d5db36bb9865a7ae52151a8e427eab6bd946c379b1a1cac5a21c95c5e801,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722623400233001598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabc2cae16befcc25b7d3ed8bd0e02ef,},Annot
ations:map[string]string{io.kubernetes.container.hash: e8c40c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b06ecce1db30bdec4cd2e712be8f31fcdfa66e374bcd54b09139a687a75a0fb,PodSandboxId:555a5b4c8c86031a9011c309146165b4d01ba30ad825162e4072b621b472ac1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722623400180461009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957665d366e4b37087fb6e5d65cc4d79,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6495b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d519dd36cf84fd662951cd356ed0b1ab34be379472175726f9d139bad260f5,PodSandboxId:cf8e1abe044bbf873d345f5a77eae98e575be668486a5fdf2792d24da3701baf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722623400095868816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2511f711fef33d4660db5baa8088c92d,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7eb54de9db8a4a8a3ab8398a7d15f14a4e5cd266d9eb40c5cc7c60bee3abb8d,PodSandboxId:c595016bf617378833e54e68ec970b936cb29c92e9f21f788e63577e3ca9aecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722623400131069099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448b4fcb8a3a8264295293f1007b8af5,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb343724-c91c-45ee-b91e-e10fe55fbd6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.679520311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec9e2cdc-896b-4462-8725-8a24ad3b3935 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.679632735Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec9e2cdc-896b-4462-8725-8a24ad3b3935 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.680896614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39e20942-8f7d-4801-88d4-1ba230cc7965 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.681533541Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623421681482108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39e20942-8f7d-4801-88d4-1ba230cc7965 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.682048364Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef685fb4-0760-495d-add1-51ad7a649ab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.682111982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef685fb4-0760-495d-add1-51ad7a649ab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.682396527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:baf80821b87eff46b02ad03480470855090d9dc627243fd60ad8cbb1979ede3f,PodSandboxId:080e6363fdbe18f380c39309a0233af626c9d1e2f2ab7172ed5bb9eb2d1dd9c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722623412844940942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tjqpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b3dd02-2d58-42a7-8e5d-83154809d967,},Annotations:map[string]string{io.kubernetes.container.hash: ace77049,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2624c79de55e18d9f75c5815cb6459e5de8ebe6eb7e686b50fd04b7ee6debad0,PodSandboxId:0fe1d397547bcf9d3f57aaf4b7f3b7e571322955ad2cf4bd67b5b3bdf5e460b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722623405869950077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnhj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 89c5222b-08e4-465d-9644-b207b5f25bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9f44e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9b8859436f511c882695be45eff295bc7738b573491d167858731d194ef3f6,PodSandboxId:0be7165e810688f5a3d6f285875d9d43a194b6c037ca5300a06909f80959c589,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722623405806056972,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
d41024-0758-4dcb-b42c-b1afe6ac9dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1970e45,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d97f592f8d7a3541f099e8bc3b33acaa1f482592625e475a39f077adf177bbe,PodSandboxId:d142d5db36bb9865a7ae52151a8e427eab6bd946c379b1a1cac5a21c95c5e801,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722623400233001598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabc2cae16befcc25b7d3ed8bd0e02ef,},Annot
ations:map[string]string{io.kubernetes.container.hash: e8c40c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b06ecce1db30bdec4cd2e712be8f31fcdfa66e374bcd54b09139a687a75a0fb,PodSandboxId:555a5b4c8c86031a9011c309146165b4d01ba30ad825162e4072b621b472ac1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722623400180461009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957665d366e4b37087fb6e5d65cc4d79,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6495b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d519dd36cf84fd662951cd356ed0b1ab34be379472175726f9d139bad260f5,PodSandboxId:cf8e1abe044bbf873d345f5a77eae98e575be668486a5fdf2792d24da3701baf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722623400095868816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2511f711fef33d4660db5baa8088c92d,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7eb54de9db8a4a8a3ab8398a7d15f14a4e5cd266d9eb40c5cc7c60bee3abb8d,PodSandboxId:c595016bf617378833e54e68ec970b936cb29c92e9f21f788e63577e3ca9aecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722623400131069099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448b4fcb8a3a8264295293f1007b8af5,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef685fb4-0760-495d-add1-51ad7a649ab5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.722768192Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4a1644d-1102-47c9-b022-da2f2bf70c63 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.722888777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4a1644d-1102-47c9-b022-da2f2bf70c63 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.723888514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fcbc9fcd-b39d-4aa6-8c9f-52863172f2a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.724340148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623421724318792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fcbc9fcd-b39d-4aa6-8c9f-52863172f2a9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.724992178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc88df01-2bfe-4665-8c8e-d1a450012c16 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.725057492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc88df01-2bfe-4665-8c8e-d1a450012c16 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.725254746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:baf80821b87eff46b02ad03480470855090d9dc627243fd60ad8cbb1979ede3f,PodSandboxId:080e6363fdbe18f380c39309a0233af626c9d1e2f2ab7172ed5bb9eb2d1dd9c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722623412844940942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tjqpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b3dd02-2d58-42a7-8e5d-83154809d967,},Annotations:map[string]string{io.kubernetes.container.hash: ace77049,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2624c79de55e18d9f75c5815cb6459e5de8ebe6eb7e686b50fd04b7ee6debad0,PodSandboxId:0fe1d397547bcf9d3f57aaf4b7f3b7e571322955ad2cf4bd67b5b3bdf5e460b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722623405869950077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnhj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 89c5222b-08e4-465d-9644-b207b5f25bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9f44e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9b8859436f511c882695be45eff295bc7738b573491d167858731d194ef3f6,PodSandboxId:0be7165e810688f5a3d6f285875d9d43a194b6c037ca5300a06909f80959c589,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722623405806056972,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
d41024-0758-4dcb-b42c-b1afe6ac9dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1970e45,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d97f592f8d7a3541f099e8bc3b33acaa1f482592625e475a39f077adf177bbe,PodSandboxId:d142d5db36bb9865a7ae52151a8e427eab6bd946c379b1a1cac5a21c95c5e801,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722623400233001598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabc2cae16befcc25b7d3ed8bd0e02ef,},Annot
ations:map[string]string{io.kubernetes.container.hash: e8c40c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b06ecce1db30bdec4cd2e712be8f31fcdfa66e374bcd54b09139a687a75a0fb,PodSandboxId:555a5b4c8c86031a9011c309146165b4d01ba30ad825162e4072b621b472ac1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722623400180461009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957665d366e4b37087fb6e5d65cc4d79,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6495b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d519dd36cf84fd662951cd356ed0b1ab34be379472175726f9d139bad260f5,PodSandboxId:cf8e1abe044bbf873d345f5a77eae98e575be668486a5fdf2792d24da3701baf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722623400095868816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2511f711fef33d4660db5baa8088c92d,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7eb54de9db8a4a8a3ab8398a7d15f14a4e5cd266d9eb40c5cc7c60bee3abb8d,PodSandboxId:c595016bf617378833e54e68ec970b936cb29c92e9f21f788e63577e3ca9aecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722623400131069099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448b4fcb8a3a8264295293f1007b8af5,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc88df01-2bfe-4665-8c8e-d1a450012c16 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.757656366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc5995fd-26f9-47af-a6a9-bd0c1fa83fb5 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.757740718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc5995fd-26f9-47af-a6a9-bd0c1fa83fb5 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.758791673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2136aeb6-8843-482e-9a39-4830efab2a40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.759338186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623421759311943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2136aeb6-8843-482e-9a39-4830efab2a40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.760075246Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bcc668f-b359-4374-a414-03b37a5aff35 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.760137111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bcc668f-b359-4374-a414-03b37a5aff35 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:30:21 test-preload-999194 crio[695]: time="2024-08-02 18:30:21.760315769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:baf80821b87eff46b02ad03480470855090d9dc627243fd60ad8cbb1979ede3f,PodSandboxId:080e6363fdbe18f380c39309a0233af626c9d1e2f2ab7172ed5bb9eb2d1dd9c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722623412844940942,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tjqpt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0b3dd02-2d58-42a7-8e5d-83154809d967,},Annotations:map[string]string{io.kubernetes.container.hash: ace77049,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2624c79de55e18d9f75c5815cb6459e5de8ebe6eb7e686b50fd04b7ee6debad0,PodSandboxId:0fe1d397547bcf9d3f57aaf4b7f3b7e571322955ad2cf4bd67b5b3bdf5e460b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722623405869950077,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fsnhj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 89c5222b-08e4-465d-9644-b207b5f25bd9,},Annotations:map[string]string{io.kubernetes.container.hash: 65c9f44e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa9b8859436f511c882695be45eff295bc7738b573491d167858731d194ef3f6,PodSandboxId:0be7165e810688f5a3d6f285875d9d43a194b6c037ca5300a06909f80959c589,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722623405806056972,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb
d41024-0758-4dcb-b42c-b1afe6ac9dc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1970e45,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d97f592f8d7a3541f099e8bc3b33acaa1f482592625e475a39f077adf177bbe,PodSandboxId:d142d5db36bb9865a7ae52151a8e427eab6bd946c379b1a1cac5a21c95c5e801,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722623400233001598,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dabc2cae16befcc25b7d3ed8bd0e02ef,},Annot
ations:map[string]string{io.kubernetes.container.hash: e8c40c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b06ecce1db30bdec4cd2e712be8f31fcdfa66e374bcd54b09139a687a75a0fb,PodSandboxId:555a5b4c8c86031a9011c309146165b4d01ba30ad825162e4072b621b472ac1e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722623400180461009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 957665d366e4b37087fb6e5d65cc4d79,},Annotations:map[s
tring]string{io.kubernetes.container.hash: e6495b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d519dd36cf84fd662951cd356ed0b1ab34be379472175726f9d139bad260f5,PodSandboxId:cf8e1abe044bbf873d345f5a77eae98e575be668486a5fdf2792d24da3701baf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722623400095868816,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2511f711fef33d4660db5baa8088c92d,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7eb54de9db8a4a8a3ab8398a7d15f14a4e5cd266d9eb40c5cc7c60bee3abb8d,PodSandboxId:c595016bf617378833e54e68ec970b936cb29c92e9f21f788e63577e3ca9aecc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722623400131069099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-999194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 448b4fcb8a3a8264295293f1007b8af5,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bcc668f-b359-4374-a414-03b37a5aff35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	baf80821b87ef       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   080e6363fdbe1       coredns-6d4b75cb6d-tjqpt
	2624c79de55e1       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   0fe1d397547bc       kube-proxy-fsnhj
	fa9b8859436f5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   0be7165e81068       storage-provisioner
	1d97f592f8d7a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   d142d5db36bb9       etcd-test-preload-999194
	4b06ecce1db30       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   555a5b4c8c860       kube-apiserver-test-preload-999194
	b7eb54de9db8a       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   c595016bf6173       kube-scheduler-test-preload-999194
	51d519dd36cf8       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   cf8e1abe044bb       kube-controller-manager-test-preload-999194
	
	
	==> coredns [baf80821b87eff46b02ad03480470855090d9dc627243fd60ad8cbb1979ede3f] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38824 - 7490 "HINFO IN 4931814493110265357.1365707379602052787. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012055548s
	
	
	==> describe nodes <==
	Name:               test-preload-999194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-999194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=test-preload-999194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_28_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:28:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-999194
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:30:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:30:14 +0000   Fri, 02 Aug 2024 18:28:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:30:14 +0000   Fri, 02 Aug 2024 18:28:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:30:14 +0000   Fri, 02 Aug 2024 18:28:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:30:14 +0000   Fri, 02 Aug 2024 18:30:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    test-preload-999194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb1b0a6b1dda429f960a609ca444474a
	  System UUID:                fb1b0a6b-1dda-429f-960a-609ca444474a
	  Boot ID:                    6942697b-a83c-4f60-bb85-737188ffb262
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-tjqpt                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-999194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-999194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-999194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-fsnhj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-test-preload-999194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node test-preload-999194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node test-preload-999194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node test-preload-999194 status is now: NodeHasSufficientPID
	  Normal  NodeReady                87s                kubelet          Node test-preload-999194 status is now: NodeReady
	  Normal  RegisteredNode           86s                node-controller  Node test-preload-999194 event: Registered Node test-preload-999194 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-999194 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-999194 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-999194 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-999194 event: Registered Node test-preload-999194 in Controller
	
	
	==> dmesg <==
	[Aug 2 18:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051057] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037034] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.699432] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.895503] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.526170] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.885564] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.057900] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050501] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.167000] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.142990] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.272342] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[ +12.727038] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.069079] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.687877] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[Aug 2 18:30] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.806489] systemd-fstab-generator[1717]: Ignoring "noauto" option for root device
	[  +5.449428] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [1d97f592f8d7a3541f099e8bc3b33acaa1f482592625e475a39f077adf177bbe] <==
	{"level":"info","ts":"2024-08-02T18:30:00.634Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"c7abbacde39fb9a4","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-02T18:30:00.636Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-02T18:30:00.636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 switched to configuration voters=(14387798828015139236)"}
	{"level":"info","ts":"2024-08-02T18:30:00.638Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-02T18:30:00.640Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c7abbacde39fb9a4","initial-advertise-peer-urls":["https://192.168.39.115:2380"],"listen-peer-urls":["https://192.168.39.115:2380"],"advertise-client-urls":["https://192.168.39.115:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.115:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T18:30:00.640Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:30:00.638Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","added-peer-id":"c7abbacde39fb9a4","added-peer-peer-urls":["https://192.168.39.115:2380"]}
	{"level":"info","ts":"2024-08-02T18:30:00.642Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"efb3de1b79640a9c","local-member-id":"c7abbacde39fb9a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:30:00.642Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:30:00.639Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-08-02T18:30:00.642Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.115:2380"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgPreVoteResp from c7abbacde39fb9a4 at term 2"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 received MsgVoteResp from c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c7abbacde39fb9a4 became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:30:01.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c7abbacde39fb9a4 elected leader c7abbacde39fb9a4 at term 3"}
	{"level":"info","ts":"2024-08-02T18:30:01.614Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"c7abbacde39fb9a4","local-member-attributes":"{Name:test-preload-999194 ClientURLs:[https://192.168.39.115:2379]}","request-path":"/0/members/c7abbacde39fb9a4/attributes","cluster-id":"efb3de1b79640a9c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:30:01.614Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:30:01.618Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:30:01.619Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:30:01.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.115:2379"}
	{"level":"info","ts":"2024-08-02T18:30:01.633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:30:01.633Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:30:22 up 0 min,  0 users,  load average: 0.40, 0.12, 0.04
	Linux test-preload-999194 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b06ecce1db30bdec4cd2e712be8f31fcdfa66e374bcd54b09139a687a75a0fb] <==
	I0802 18:30:04.008088       1 naming_controller.go:291] Starting NamingConditionController
	I0802 18:30:04.008373       1 establishing_controller.go:76] Starting EstablishingController
	I0802 18:30:04.008414       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0802 18:30:04.008435       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0802 18:30:04.008465       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0802 18:30:04.031764       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0802 18:30:04.049096       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0802 18:30:04.141218       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0802 18:30:04.181880       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0802 18:30:04.182478       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:30:04.182687       1 cache.go:39] Caches are synced for autoregister controller
	I0802 18:30:04.190131       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0802 18:30:04.190547       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:30:04.193327       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0802 18:30:04.193373       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:30:04.680239       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0802 18:30:04.994499       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:30:05.640487       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0802 18:30:05.667050       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0802 18:30:05.726448       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0802 18:30:05.754504       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:30:05.762292       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:30:06.118275       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0802 18:30:17.297781       1 controller.go:611] quota admission added evaluator for: endpoints
	I0802 18:30:17.317141       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [51d519dd36cf84fd662951cd356ed0b1ab34be379472175726f9d139bad260f5] <==
	I0802 18:30:17.205978       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0802 18:30:17.206592       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 18:30:17.207032       1 shared_informer.go:262] Caches are synced for node
	I0802 18:30:17.207069       1 range_allocator.go:173] Starting range CIDR allocator
	I0802 18:30:17.207074       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0802 18:30:17.207080       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0802 18:30:17.213627       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0802 18:30:17.215224       1 shared_informer.go:262] Caches are synced for service account
	I0802 18:30:17.217921       1 shared_informer.go:262] Caches are synced for job
	I0802 18:30:17.220480       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0802 18:30:17.224678       1 shared_informer.go:262] Caches are synced for TTL
	I0802 18:30:17.226926       1 shared_informer.go:262] Caches are synced for GC
	I0802 18:30:17.236582       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0802 18:30:17.285230       1 shared_informer.go:262] Caches are synced for crt configmap
	I0802 18:30:17.287612       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0802 18:30:17.287679       1 shared_informer.go:262] Caches are synced for endpoint
	I0802 18:30:17.308277       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0802 18:30:17.310876       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0802 18:30:17.424541       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 18:30:17.437045       1 shared_informer.go:262] Caches are synced for disruption
	I0802 18:30:17.437080       1 disruption.go:371] Sending events to api server.
	I0802 18:30:17.452338       1 shared_informer.go:262] Caches are synced for resource quota
	I0802 18:30:17.864114       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 18:30:17.889524       1 shared_informer.go:262] Caches are synced for garbage collector
	I0802 18:30:17.889577       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [2624c79de55e18d9f75c5815cb6459e5de8ebe6eb7e686b50fd04b7ee6debad0] <==
	I0802 18:30:06.070446       1 node.go:163] Successfully retrieved node IP: 192.168.39.115
	I0802 18:30:06.070658       1 server_others.go:138] "Detected node IP" address="192.168.39.115"
	I0802 18:30:06.070754       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0802 18:30:06.111786       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0802 18:30:06.111897       1 server_others.go:206] "Using iptables Proxier"
	I0802 18:30:06.112326       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0802 18:30:06.112610       1 server.go:661] "Version info" version="v1.24.4"
	I0802 18:30:06.112701       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:30:06.114079       1 config.go:317] "Starting service config controller"
	I0802 18:30:06.114132       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0802 18:30:06.114161       1 config.go:226] "Starting endpoint slice config controller"
	I0802 18:30:06.114177       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0802 18:30:06.114773       1 config.go:444] "Starting node config controller"
	I0802 18:30:06.114957       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0802 18:30:06.215347       1 shared_informer.go:262] Caches are synced for node config
	I0802 18:30:06.215419       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0802 18:30:06.215429       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [b7eb54de9db8a4a8a3ab8398a7d15f14a4e5cd266d9eb40c5cc7c60bee3abb8d] <==
	I0802 18:30:00.743904       1 serving.go:348] Generated self-signed cert in-memory
	W0802 18:30:04.060391       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:30:04.060489       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:30:04.060501       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:30:04.060508       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:30:04.111193       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0802 18:30:04.112877       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:30:04.119242       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0802 18:30:04.119507       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:30:04.119551       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:30:04.123150       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:30:04.223475       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.504924    1092 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume\") pod \"coredns-6d4b75cb6d-tjqpt\" (UID: \"f0b3dd02-2d58-42a7-8e5d-83154809d967\") " pod="kube-system/coredns-6d4b75cb6d-tjqpt"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.504949    1092 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxzg\" (UniqueName: \"kubernetes.io/projected/f0b3dd02-2d58-42a7-8e5d-83154809d967-kube-api-access-lsxzg\") pod \"coredns-6d4b75cb6d-tjqpt\" (UID: \"f0b3dd02-2d58-42a7-8e5d-83154809d967\") " pod="kube-system/coredns-6d4b75cb6d-tjqpt"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.504969    1092 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c5222b-08e4-465d-9644-b207b5f25bd9-lib-modules\") pod \"kube-proxy-fsnhj\" (UID: \"89c5222b-08e4-465d-9644-b207b5f25bd9\") " pod="kube-system/kube-proxy-fsnhj"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.504994    1092 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbg7q\" (UniqueName: \"kubernetes.io/projected/89c5222b-08e4-465d-9644-b207b5f25bd9-kube-api-access-jbg7q\") pod \"kube-proxy-fsnhj\" (UID: \"89c5222b-08e4-465d-9644-b207b5f25bd9\") " pod="kube-system/kube-proxy-fsnhj"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.505038    1092 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzwl\" (UniqueName: \"kubernetes.io/projected/fbd41024-0758-4dcb-b42c-b1afe6ac9dc3-kube-api-access-crzwl\") pod \"storage-provisioner\" (UID: \"fbd41024-0758-4dcb-b42c-b1afe6ac9dc3\") " pod="kube-system/storage-provisioner"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.505060    1092 reconciler.go:159] "Reconciler: start to sync state"
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.925911    1092 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngck5\" (UniqueName: \"kubernetes.io/projected/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-kube-api-access-ngck5\") pod \"cb7c722d-254f-4d29-acbe-2222dd2c5dfa\" (UID: \"cb7c722d-254f-4d29-acbe-2222dd2c5dfa\") "
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.925977    1092 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-config-volume\") pod \"cb7c722d-254f-4d29-acbe-2222dd2c5dfa\" (UID: \"cb7c722d-254f-4d29-acbe-2222dd2c5dfa\") "
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: W0802 18:30:04.927885    1092 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/cb7c722d-254f-4d29-acbe-2222dd2c5dfa/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: E0802 18:30:04.928216    1092 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: W0802 18:30:04.928252    1092 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/cb7c722d-254f-4d29-acbe-2222dd2c5dfa/volumes/kubernetes.io~projected/kube-api-access-ngck5: clearQuota called, but quotas disabled
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: E0802 18:30:04.928296    1092 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume podName:f0b3dd02-2d58-42a7-8e5d-83154809d967 nodeName:}" failed. No retries permitted until 2024-08-02 18:30:05.428261115 +0000 UTC m=+6.121331312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume") pod "coredns-6d4b75cb6d-tjqpt" (UID: "f0b3dd02-2d58-42a7-8e5d-83154809d967") : object "kube-system"/"coredns" not registered
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.928454    1092 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-kube-api-access-ngck5" (OuterVolumeSpecName: "kube-api-access-ngck5") pod "cb7c722d-254f-4d29-acbe-2222dd2c5dfa" (UID: "cb7c722d-254f-4d29-acbe-2222dd2c5dfa"). InnerVolumeSpecName "kube-api-access-ngck5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 02 18:30:04 test-preload-999194 kubelet[1092]: I0802 18:30:04.929003    1092 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb7c722d-254f-4d29-acbe-2222dd2c5dfa" (UID: "cb7c722d-254f-4d29-acbe-2222dd2c5dfa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 02 18:30:05 test-preload-999194 kubelet[1092]: I0802 18:30:05.027291    1092 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-config-volume\") on node \"test-preload-999194\" DevicePath \"\""
	Aug 02 18:30:05 test-preload-999194 kubelet[1092]: I0802 18:30:05.027359    1092 reconciler.go:384] "Volume detached for volume \"kube-api-access-ngck5\" (UniqueName: \"kubernetes.io/projected/cb7c722d-254f-4d29-acbe-2222dd2c5dfa-kube-api-access-ngck5\") on node \"test-preload-999194\" DevicePath \"\""
	Aug 02 18:30:05 test-preload-999194 kubelet[1092]: E0802 18:30:05.429298    1092 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 02 18:30:05 test-preload-999194 kubelet[1092]: E0802 18:30:05.429383    1092 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume podName:f0b3dd02-2d58-42a7-8e5d-83154809d967 nodeName:}" failed. No retries permitted until 2024-08-02 18:30:06.429363319 +0000 UTC m=+7.122433514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume") pod "coredns-6d4b75cb6d-tjqpt" (UID: "f0b3dd02-2d58-42a7-8e5d-83154809d967") : object "kube-system"/"coredns" not registered
	Aug 02 18:30:06 test-preload-999194 kubelet[1092]: E0802 18:30:06.436569    1092 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 02 18:30:06 test-preload-999194 kubelet[1092]: E0802 18:30:06.436652    1092 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume podName:f0b3dd02-2d58-42a7-8e5d-83154809d967 nodeName:}" failed. No retries permitted until 2024-08-02 18:30:08.436635441 +0000 UTC m=+9.129705652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume") pod "coredns-6d4b75cb6d-tjqpt" (UID: "f0b3dd02-2d58-42a7-8e5d-83154809d967") : object "kube-system"/"coredns" not registered
	Aug 02 18:30:06 test-preload-999194 kubelet[1092]: E0802 18:30:06.526349    1092 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tjqpt" podUID=f0b3dd02-2d58-42a7-8e5d-83154809d967
	Aug 02 18:30:07 test-preload-999194 kubelet[1092]: I0802 18:30:07.531258    1092 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cb7c722d-254f-4d29-acbe-2222dd2c5dfa path="/var/lib/kubelet/pods/cb7c722d-254f-4d29-acbe-2222dd2c5dfa/volumes"
	Aug 02 18:30:08 test-preload-999194 kubelet[1092]: E0802 18:30:08.451327    1092 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 02 18:30:08 test-preload-999194 kubelet[1092]: E0802 18:30:08.451432    1092 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume podName:f0b3dd02-2d58-42a7-8e5d-83154809d967 nodeName:}" failed. No retries permitted until 2024-08-02 18:30:12.451411863 +0000 UTC m=+13.144482070 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0b3dd02-2d58-42a7-8e5d-83154809d967-config-volume") pod "coredns-6d4b75cb6d-tjqpt" (UID: "f0b3dd02-2d58-42a7-8e5d-83154809d967") : object "kube-system"/"coredns" not registered
	Aug 02 18:30:08 test-preload-999194 kubelet[1092]: E0802 18:30:08.525961    1092 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tjqpt" podUID=f0b3dd02-2d58-42a7-8e5d-83154809d967
	
	
	==> storage-provisioner [fa9b8859436f511c882695be45eff295bc7738b573491d167858731d194ef3f6] <==
	I0802 18:30:05.908418       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-999194 -n test-preload-999194
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-999194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-999194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-999194
--- FAIL: TestPreload (192.99s)

                                                
                                    
x
+
TestKubernetesUpgrade (729.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m17.095777533s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-132946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-132946" primary control-plane node in "kubernetes-upgrade-132946" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:32:56.586342   48425 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:32:56.586591   48425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:32:56.586601   48425 out.go:304] Setting ErrFile to fd 2...
	I0802 18:32:56.586607   48425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:32:56.586788   48425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:32:56.587408   48425 out.go:298] Setting JSON to false
	I0802 18:32:56.588259   48425 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4521,"bootTime":1722619056,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:32:56.588321   48425 start.go:139] virtualization: kvm guest
	I0802 18:32:56.630071   48425 out.go:177] * [kubernetes-upgrade-132946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:32:56.692631   48425 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:32:56.692672   48425 notify.go:220] Checking for updates...
	I0802 18:32:56.839593   48425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:32:56.902251   48425 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:32:56.975019   48425 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:32:57.036407   48425 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:32:57.100168   48425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:32:57.172518   48425 config.go:182] Loaded profile config "NoKubernetes-891799": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:32:57.172632   48425 config.go:182] Loaded profile config "offline-crio-872961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:32:57.172700   48425 config.go:182] Loaded profile config "running-upgrade-079131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0802 18:32:57.172781   48425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:32:57.297822   48425 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:32:57.317510   48425 start.go:297] selected driver: kvm2
	I0802 18:32:57.317540   48425 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:32:57.317552   48425 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:32:57.318340   48425 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:32:57.318468   48425 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:32:57.334695   48425 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:32:57.334783   48425 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 18:32:57.335095   48425 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 18:32:57.335198   48425 cni.go:84] Creating CNI manager for ""
	I0802 18:32:57.335218   48425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:32:57.335230   48425 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 18:32:57.335330   48425 start.go:340] cluster config:
	{Name:kubernetes-upgrade-132946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-132946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:32:57.335464   48425 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:32:57.407843   48425 out.go:177] * Starting "kubernetes-upgrade-132946" primary control-plane node in "kubernetes-upgrade-132946" cluster
	I0802 18:32:57.409404   48425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:32:57.409447   48425 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0802 18:32:57.409466   48425 cache.go:56] Caching tarball of preloaded images
	I0802 18:32:57.409542   48425 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:32:57.409552   48425 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0802 18:32:57.409644   48425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/config.json ...
	I0802 18:32:57.409660   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/config.json: {Name:mk03cc3f46b6f023dd2a50488bf58ab5ee192d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:32:57.409778   48425 start.go:360] acquireMachinesLock for kubernetes-upgrade-132946: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:33:44.423578   48425 start.go:364] duration metric: took 47.013774303s to acquireMachinesLock for "kubernetes-upgrade-132946"
	I0802 18:33:44.423646   48425 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-132946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-132946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:33:44.423765   48425 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 18:33:44.426028   48425 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 18:33:44.426217   48425 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:33:44.426273   48425 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:33:44.443017   48425 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0802 18:33:44.443565   48425 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:33:44.444247   48425 main.go:141] libmachine: Using API Version  1
	I0802 18:33:44.444271   48425 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:33:44.444701   48425 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:33:44.444928   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetMachineName
	I0802 18:33:44.445109   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:33:44.445297   48425 start.go:159] libmachine.API.Create for "kubernetes-upgrade-132946" (driver="kvm2")
	I0802 18:33:44.445331   48425 client.go:168] LocalClient.Create starting
	I0802 18:33:44.445371   48425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 18:33:44.445412   48425 main.go:141] libmachine: Decoding PEM data...
	I0802 18:33:44.445435   48425 main.go:141] libmachine: Parsing certificate...
	I0802 18:33:44.445504   48425 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 18:33:44.445527   48425 main.go:141] libmachine: Decoding PEM data...
	I0802 18:33:44.445553   48425 main.go:141] libmachine: Parsing certificate...
	I0802 18:33:44.445576   48425 main.go:141] libmachine: Running pre-create checks...
	I0802 18:33:44.445601   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .PreCreateCheck
	I0802 18:33:44.446136   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetConfigRaw
	I0802 18:33:44.446603   48425 main.go:141] libmachine: Creating machine...
	I0802 18:33:44.446620   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Create
	I0802 18:33:44.446770   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Creating KVM machine...
	I0802 18:33:44.447933   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found existing default KVM network
	I0802 18:33:44.449523   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.449362   49096 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:73:73:7e} reservation:<nil>}
	I0802 18:33:44.450314   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.450220   49096 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1a:be:fd} reservation:<nil>}
	I0802 18:33:44.451147   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.451045   49096 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:bd:62} reservation:<nil>}
	I0802 18:33:44.452194   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.452104   49096 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028ba90}
	I0802 18:33:44.452215   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | created network xml: 
	I0802 18:33:44.452227   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | <network>
	I0802 18:33:44.452236   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   <name>mk-kubernetes-upgrade-132946</name>
	I0802 18:33:44.452247   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   <dns enable='no'/>
	I0802 18:33:44.452254   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   
	I0802 18:33:44.452266   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0802 18:33:44.452274   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |     <dhcp>
	I0802 18:33:44.452285   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0802 18:33:44.452297   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |     </dhcp>
	I0802 18:33:44.452306   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   </ip>
	I0802 18:33:44.452313   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG |   
	I0802 18:33:44.452330   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | </network>
	I0802 18:33:44.452341   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | 
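
The network definition printed above can be reproduced outside of the test run. A minimal Go sketch (standard library only; the name, gateway, and DHCP range are copied from the log, not discovered from a live host) that renders an equivalent libvirt network XML:

    // Render a libvirt network definition like the one logged above.
    // A sketch; values are placeholders taken from the log.
    package main

    import (
    	"os"
    	"text/template"
    )

    type netParams struct {
    	Name      string
    	Gateway   string
    	Netmask   string
    	DHCPStart string
    	DHCPEnd   string
    }

    const netTmpl = `<network>
      <name>{{.Name}}</name>
      <dns enable='no'/>
      <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
        <dhcp>
          <range start='{{.DHCPStart}}' end='{{.DHCPEnd}}'/>
        </dhcp>
      </ip>
    </network>
    `

    func main() {
    	p := netParams{
    		Name:      "mk-kubernetes-upgrade-132946",
    		Gateway:   "192.168.72.1",
    		Netmask:   "255.255.255.0",
    		DHCPStart: "192.168.72.2",
    		DHCPEnd:   "192.168.72.253",
    	}
    	t := template.Must(template.New("net").Parse(netTmpl))
    	if err := t.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }
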
	I0802 18:33:44.458399   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | trying to create private KVM network mk-kubernetes-upgrade-132946 192.168.72.0/24...
	I0802 18:33:44.530062   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | private KVM network mk-kubernetes-upgrade-132946 192.168.72.0/24 created
	I0802 18:33:44.530120   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.530043   49096 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:33:44.530135   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946 ...
	I0802 18:33:44.530160   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 18:33:44.530179   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 18:33:44.787945   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:44.787825   49096 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa...
	I0802 18:33:45.006904   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:45.006739   49096 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/kubernetes-upgrade-132946.rawdisk...
	I0802 18:33:45.006941   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Writing magic tar header
	I0802 18:33:45.006961   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Writing SSH key tar header
	I0802 18:33:45.006974   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:45.006915   49096 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946 ...
	I0802 18:33:45.007074   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946
	I0802 18:33:45.007117   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 18:33:45.007135   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946 (perms=drwx------)
	I0802 18:33:45.007151   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 18:33:45.007161   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 18:33:45.007176   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:33:45.007191   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 18:33:45.007230   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 18:33:45.007243   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 18:33:45.007254   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home/jenkins
	I0802 18:33:45.007264   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 18:33:45.007272   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Checking permissions on dir: /home
	I0802 18:33:45.007288   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Skipping /home - not owner
	I0802 18:33:45.007303   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
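
The permission pass above walks from the machine directory up toward /home, adding the owner execute bit where it is missing and skipping directories the test user does not own. A simplified sketch of that walk (ownership checking omitted; the path is the one from the log):

    // Ensure each parent directory of the store path has the owner execute bit.
    // A simplified sketch of the permission fix-up logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	dir := "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946"
    	for dir != "/" && dir != "/home" {
    		info, err := os.Stat(dir)
    		if err != nil {
    			panic(err)
    		}
    		if info.Mode()&0o100 == 0 { // owner execute bit missing
    			if err := os.Chmod(dir, info.Mode()|0o100); err != nil {
    				panic(err)
    			}
    			fmt.Println("set executable bit on", dir)
    		}
    		dir = filepath.Dir(dir)
    	}
    }
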
	I0802 18:33:45.007313   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Creating domain...
	I0802 18:33:45.008449   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) define libvirt domain using xml: 
	I0802 18:33:45.008472   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) <domain type='kvm'>
	I0802 18:33:45.008479   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <name>kubernetes-upgrade-132946</name>
	I0802 18:33:45.008485   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <memory unit='MiB'>2200</memory>
	I0802 18:33:45.008491   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <vcpu>2</vcpu>
	I0802 18:33:45.008495   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <features>
	I0802 18:33:45.008512   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <acpi/>
	I0802 18:33:45.008523   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <apic/>
	I0802 18:33:45.008532   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <pae/>
	I0802 18:33:45.008542   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     
	I0802 18:33:45.008554   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   </features>
	I0802 18:33:45.008565   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <cpu mode='host-passthrough'>
	I0802 18:33:45.008576   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   
	I0802 18:33:45.008583   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   </cpu>
	I0802 18:33:45.008589   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <os>
	I0802 18:33:45.008594   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <type>hvm</type>
	I0802 18:33:45.008602   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <boot dev='cdrom'/>
	I0802 18:33:45.008613   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <boot dev='hd'/>
	I0802 18:33:45.008626   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <bootmenu enable='no'/>
	I0802 18:33:45.008636   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   </os>
	I0802 18:33:45.008661   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   <devices>
	I0802 18:33:45.008681   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <disk type='file' device='cdrom'>
	I0802 18:33:45.008697   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/boot2docker.iso'/>
	I0802 18:33:45.008714   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <target dev='hdc' bus='scsi'/>
	I0802 18:33:45.008728   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <readonly/>
	I0802 18:33:45.008739   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </disk>
	I0802 18:33:45.008751   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <disk type='file' device='disk'>
	I0802 18:33:45.008765   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 18:33:45.008797   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/kubernetes-upgrade-132946.rawdisk'/>
	I0802 18:33:45.008821   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <target dev='hda' bus='virtio'/>
	I0802 18:33:45.008829   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </disk>
	I0802 18:33:45.008839   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <interface type='network'>
	I0802 18:33:45.008849   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <source network='mk-kubernetes-upgrade-132946'/>
	I0802 18:33:45.008857   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <model type='virtio'/>
	I0802 18:33:45.008866   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </interface>
	I0802 18:33:45.008875   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <interface type='network'>
	I0802 18:33:45.008884   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <source network='default'/>
	I0802 18:33:45.008892   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <model type='virtio'/>
	I0802 18:33:45.008906   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </interface>
	I0802 18:33:45.008918   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <serial type='pty'>
	I0802 18:33:45.008927   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <target port='0'/>
	I0802 18:33:45.008934   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </serial>
	I0802 18:33:45.008942   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <console type='pty'>
	I0802 18:33:45.008953   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <target type='serial' port='0'/>
	I0802 18:33:45.008965   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </console>
	I0802 18:33:45.008975   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     <rng model='virtio'>
	I0802 18:33:45.008986   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)       <backend model='random'>/dev/random</backend>
	I0802 18:33:45.008999   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     </rng>
	I0802 18:33:45.009010   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     
	I0802 18:33:45.009028   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)     
	I0802 18:33:45.009036   48425 main.go:141] libmachine: (kubernetes-upgrade-132946)   </devices>
	I0802 18:33:45.009040   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) </domain>
	I0802 18:33:45.009048   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) 
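
minikube drives libvirt through its Go bindings rather than the virsh CLI, but the same define-and-start sequence can be reproduced with virsh when debugging by hand. A sketch that shells out to virsh (the XML file names are placeholders for the network and domain definitions logged above):

    // Define and start the libvirt network and domain from XML files.
    // A sketch for manual debugging; minikube itself uses libvirt Go bindings.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("virsh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
    	}
    	fmt.Printf("virsh %v ok\n", args)
    	return nil
    }

    func main() {
    	steps := [][]string{
    		{"net-define", "mk-net.xml"}, // network XML as logged earlier
    		{"net-start", "mk-kubernetes-upgrade-132946"},
    		{"define", "domain.xml"}, // domain XML as logged above
    		{"start", "kubernetes-upgrade-132946"},
    	}
    	for _, s := range steps {
    		if err := run(s...); err != nil {
    			panic(err)
    		}
    	}
    }
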
	I0802 18:33:45.013374   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:58:af:1f in network default
	I0802 18:33:45.013904   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Ensuring networks are active...
	I0802 18:33:45.013922   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:45.014643   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Ensuring network default is active
	I0802 18:33:45.015005   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Ensuring network mk-kubernetes-upgrade-132946 is active
	I0802 18:33:45.015723   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Getting domain xml...
	I0802 18:33:45.016512   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Creating domain...
	I0802 18:33:46.244531   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Waiting to get IP...
	I0802 18:33:46.245441   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.245890   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.245932   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:46.245885   49096 retry.go:31] will retry after 238.516821ms: waiting for machine to come up
	I0802 18:33:46.486352   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.486978   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.487011   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:46.486927   49096 retry.go:31] will retry after 387.4538ms: waiting for machine to come up
	I0802 18:33:46.876612   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.877126   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:46.877155   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:46.877070   49096 retry.go:31] will retry after 319.518814ms: waiting for machine to come up
	I0802 18:33:47.198528   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:47.198915   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:47.198932   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:47.198873   49096 retry.go:31] will retry after 445.127413ms: waiting for machine to come up
	I0802 18:33:47.645451   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:47.645894   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:47.645927   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:47.645848   49096 retry.go:31] will retry after 737.863024ms: waiting for machine to come up
	I0802 18:33:48.385998   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:48.386568   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:48.386599   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:48.386489   49096 retry.go:31] will retry after 584.360578ms: waiting for machine to come up
	I0802 18:33:48.972311   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:48.972793   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:48.972823   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:48.972754   49096 retry.go:31] will retry after 768.199806ms: waiting for machine to come up
	I0802 18:33:49.743214   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:49.744032   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:49.744165   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:49.744110   49096 retry.go:31] will retry after 1.420314307s: waiting for machine to come up
	I0802 18:33:51.165939   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:51.166536   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:51.166569   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:51.166465   49096 retry.go:31] will retry after 1.799116906s: waiting for machine to come up
	I0802 18:33:52.968043   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:52.968614   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:52.968652   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:52.968544   49096 retry.go:31] will retry after 1.783020491s: waiting for machine to come up
	I0802 18:33:54.752726   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:54.753513   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:54.753541   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:54.753467   49096 retry.go:31] will retry after 2.494792501s: waiting for machine to come up
	I0802 18:33:57.250081   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:33:57.250638   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:33:57.250657   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:33:57.250545   49096 retry.go:31] will retry after 3.316245306s: waiting for machine to come up
	I0802 18:34:00.568156   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:00.568671   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:34:00.568692   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:34:00.568634   49096 retry.go:31] will retry after 3.28417356s: waiting for machine to come up
	I0802 18:34:03.856129   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:03.856610   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find current IP address of domain kubernetes-upgrade-132946 in network mk-kubernetes-upgrade-132946
	I0802 18:34:03.856640   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | I0802 18:34:03.856552   49096 retry.go:31] will retry after 3.970231545s: waiting for machine to come up
	I0802 18:34:07.829053   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:07.829635   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Found IP for machine: 192.168.72.113
	I0802 18:34:07.829662   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Reserving static IP address...
	I0802 18:34:07.829678   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has current primary IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:07.830015   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-132946", mac: "52:54:00:af:a0:7e", ip: "192.168.72.113"} in network mk-kubernetes-upgrade-132946
	I0802 18:34:07.909699   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Getting to WaitForSSH function...
	I0802 18:34:07.909735   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Reserved static IP address: 192.168.72.113
	I0802 18:34:07.909751   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Waiting for SSH to be available...
	I0802 18:34:07.913084   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:07.913561   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:minikube Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:07.913595   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:07.913787   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Using SSH client type: external
	I0802 18:34:07.913828   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa (-rw-------)
	I0802 18:34:07.913869   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.113 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:34:07.913879   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | About to run SSH command:
	I0802 18:34:07.913901   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | exit 0
	I0802 18:34:08.051427   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | SSH cmd err, output: <nil>: 
	I0802 18:34:08.051710   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) KVM machine creation complete!
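
The wait-for-IP and wait-for-SSH phases above retry with growing delays until the probe succeeds. A generic sketch of that retry-with-backoff pattern (the probe, attempt limit, and jitter below are illustrative stand-ins, not minikube's actual retry.go implementation):

    // Retry a probe with roughly doubling, jittered delays, like the
    // "will retry after ..." loop above. A sketch with placeholder bounds.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retry(probe func() error, maxAttempts int, base time.Duration) error {
    	delay := base
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		err := probe()
    		if err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		wait := delay + jitter
    		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return errors.New("gave up waiting")
    }

    func main() {
    	start := time.Now()
    	err := retry(func() error {
    		if time.Since(start) < 2*time.Second {
    			return errors.New("no IP yet") // stand-in for a DHCP lease lookup
    		}
    		return nil
    	}, 10, 250*time.Millisecond)
    	fmt.Println("result:", err)
    }
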
	I0802 18:34:08.052118   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetConfigRaw
	I0802 18:34:08.052685   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:08.052913   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:08.053093   48425 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 18:34:08.053111   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetState
	I0802 18:34:08.054644   48425 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 18:34:08.054662   48425 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 18:34:08.054670   48425 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 18:34:08.054679   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.057275   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.059261   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.059287   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.059462   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:08.059676   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.059834   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.059991   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:08.060186   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:08.060440   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:08.060456   48425 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 18:34:08.182669   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:34:08.182696   48425 main.go:141] libmachine: Detecting the provisioner...
	I0802 18:34:08.182707   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.185953   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.186416   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.186469   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.186636   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:08.186859   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.187049   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.187208   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:08.187399   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:08.187556   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:08.187570   48425 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 18:34:08.309259   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 18:34:08.309385   48425 main.go:141] libmachine: found compatible host: buildroot
	I0802 18:34:08.309408   48425 main.go:141] libmachine: Provisioning with buildroot...
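
The provisioner detection boils down to reading /etc/os-release on the guest and matching the ID field ("buildroot" here). A local sketch of that parse (minikube runs `cat /etc/os-release` over SSH rather than opening the file directly):

    // Parse /etc/os-release and report a buildroot host, mirroring the
    // "Detecting the provisioner..." step above. A local-file sketch.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	fields := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || !strings.Contains(line, "=") {
    			continue
    		}
    		kv := strings.SplitN(line, "=", 2)
    		fields[kv[0]] = strings.Trim(kv[1], `"`)
    	}
    	if err := sc.Err(); err != nil {
    		panic(err)
    	}
    	if strings.EqualFold(fields["ID"], "buildroot") {
    		fmt.Println("found compatible host: buildroot", fields["VERSION_ID"])
    	} else {
    		fmt.Println("unrecognized provisioner:", fields["ID"])
    	}
    }
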
	I0802 18:34:08.309419   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetMachineName
	I0802 18:34:08.309703   48425 buildroot.go:166] provisioning hostname "kubernetes-upgrade-132946"
	I0802 18:34:08.309730   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetMachineName
	I0802 18:34:08.309929   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.313275   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.313662   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.313705   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.313966   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:08.314175   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.314391   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.314563   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:08.314812   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:08.315050   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:08.315074   48425 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-132946 && echo "kubernetes-upgrade-132946" | sudo tee /etc/hostname
	I0802 18:34:08.456811   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-132946
	
	I0802 18:34:08.456850   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.460274   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.460742   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.460775   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.461068   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:08.461252   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.461409   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.461605   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:08.461811   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:08.461997   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:08.462021   48425 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-132946' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-132946/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-132946' | sudo tee -a /etc/hosts; 
				fi
			fi
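
Each provisioning command above is executed over SSH as the docker user with the generated machine key. A sketch of running one such command with golang.org/x/crypto/ssh (host, key path, and command copied from the log; error handling reduced to panics):

    // Run a provisioning command on the new VM over SSH, as the buildroot
    // provisioner does above. A sketch; values are taken from the log.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.113:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	const cmd = `sudo hostname kubernetes-upgrade-132946 && echo "kubernetes-upgrade-132946" | sudo tee /etc/hostname`
    	out, err := sess.CombinedOutput(cmd)
    	fmt.Printf("output: %s err: %v\n", out, err)
    }
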
	I0802 18:34:08.592573   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:34:08.592611   48425 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:34:08.592646   48425 buildroot.go:174] setting up certificates
	I0802 18:34:08.592658   48425 provision.go:84] configureAuth start
	I0802 18:34:08.592685   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetMachineName
	I0802 18:34:08.592993   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetIP
	I0802 18:34:08.596069   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.596456   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.596487   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.596699   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.599480   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.599891   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.599921   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.600086   48425 provision.go:143] copyHostCerts
	I0802 18:34:08.600151   48425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:34:08.600164   48425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:34:08.600231   48425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:34:08.600410   48425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:34:08.600424   48425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:34:08.600464   48425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:34:08.600570   48425 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:34:08.600582   48425 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:34:08.600648   48425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:34:08.600751   48425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-132946 san=[127.0.0.1 192.168.72.113 kubernetes-upgrade-132946 localhost minikube]
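
The server certificate is issued with the SAN list shown above and the configured 26280h (three-year) lifetime. A reduced sketch with crypto/x509; it self-signs for brevity, whereas the real flow signs with the minikube CA key:

    // Issue a certificate carrying the SANs from the "generating server cert"
    // line above. Self-signed sketch; the real flow signs with the minikube CA.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-132946"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // 26280h is roughly three years
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-132946", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.113")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
    }
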
	I0802 18:34:08.884843   48425 provision.go:177] copyRemoteCerts
	I0802 18:34:08.884909   48425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:34:08.884939   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:08.888044   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.888405   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:08.888438   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:08.888595   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:08.888800   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:08.888970   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:08.889141   48425 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:34:08.977514   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:34:09.007364   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0802 18:34:09.038639   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:34:09.072770   48425 provision.go:87] duration metric: took 480.095501ms to configureAuth
	I0802 18:34:09.072798   48425 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:34:09.072952   48425 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:34:09.073027   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:09.075828   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.076178   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.076204   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.076363   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:09.076585   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.076759   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.076927   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:09.077066   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:09.077238   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:09.077255   48425 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:34:09.371512   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:34:09.371549   48425 main.go:141] libmachine: Checking connection to Docker...
	I0802 18:34:09.371559   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetURL
	I0802 18:34:09.373208   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Using libvirt version 6000000
	I0802 18:34:09.375676   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.376020   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.376054   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.376186   48425 main.go:141] libmachine: Docker is up and running!
	I0802 18:34:09.376207   48425 main.go:141] libmachine: Reticulating splines...
	I0802 18:34:09.376215   48425 client.go:171] duration metric: took 24.930874497s to LocalClient.Create
	I0802 18:34:09.376240   48425 start.go:167] duration metric: took 24.930945237s to libmachine.API.Create "kubernetes-upgrade-132946"
	I0802 18:34:09.376252   48425 start.go:293] postStartSetup for "kubernetes-upgrade-132946" (driver="kvm2")
	I0802 18:34:09.376264   48425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:34:09.376286   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:09.376508   48425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:34:09.376533   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:09.378882   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.379277   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.379302   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.379485   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:09.379682   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.379859   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:09.380009   48425 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:34:09.470904   48425 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:34:09.475751   48425 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:34:09.475777   48425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:34:09.475859   48425 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:34:09.475975   48425 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:34:09.476099   48425 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:34:09.487356   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:34:09.510899   48425 start.go:296] duration metric: took 134.632308ms for postStartSetup
	I0802 18:34:09.510973   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetConfigRaw
	I0802 18:34:09.511589   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetIP
	I0802 18:34:09.514467   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.514913   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.514950   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.515252   48425 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/config.json ...
	I0802 18:34:09.515495   48425 start.go:128] duration metric: took 25.091710884s to createHost
	I0802 18:34:09.515528   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:09.517706   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.518035   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.518076   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.518184   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:09.518364   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.518543   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.518716   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:09.518896   48425 main.go:141] libmachine: Using SSH client type: native
	I0802 18:34:09.519121   48425 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.113 22 <nil> <nil>}
	I0802 18:34:09.519135   48425 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:34:09.644375   48425 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623649.604728816
	
	I0802 18:34:09.644400   48425 fix.go:216] guest clock: 1722623649.604728816
	I0802 18:34:09.644411   48425 fix.go:229] Guest: 2024-08-02 18:34:09.604728816 +0000 UTC Remote: 2024-08-02 18:34:09.515512327 +0000 UTC m=+72.964255773 (delta=89.216489ms)
	I0802 18:34:09.644457   48425 fix.go:200] guest clock delta is within tolerance: 89.216489ms
	I0802 18:34:09.644465   48425 start.go:83] releasing machines lock for "kubernetes-upgrade-132946", held for 25.220852271s
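
The guest-clock check parses the remote `date +%s.%N` output and compares it with the local wall clock; the 89ms delta above passes. A sketch of that comparison using the two timestamps from the log (the 3s tolerance below is an assumed threshold, not the value minikube uses):

    // Compare the guest clock reading with the local reference and report the
    // delta, as the fix.go check above does. Values copied from the log.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func parseUnix(s string) time.Time {
    	secs, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		panic(err)
    	}
    	return time.Unix(0, int64(secs*float64(time.Second)))
    }

    func main() {
    	guest := parseUnix("1722623649.604728816") // `date +%s.%N` on the guest
    	host := parseUnix("1722623649.515512327")  // local wall clock at the same moment
    	delta := guest.Sub(host)
    	fmt.Printf("guest clock: %s, delta: %s\n", guest.UTC(), delta)
    	if math.Abs(delta.Seconds()) < 3 { // tolerance threshold is an assumption
    		fmt.Println("guest clock delta is within tolerance")
    	}
    }
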
	I0802 18:34:09.644494   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:09.644769   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetIP
	I0802 18:34:09.648137   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.648629   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.648658   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.648890   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:09.649405   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:09.649586   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:34:09.649664   48425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:34:09.649705   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:09.649786   48425 ssh_runner.go:195] Run: cat /version.json
	I0802 18:34:09.649811   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:34:09.652768   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.653386   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.653416   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.653506   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.653695   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:09.653861   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.653949   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:09.654030   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:09.654051   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:09.654211   48425 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:34:09.654399   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:34:09.654550   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:34:09.654709   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:34:09.654840   48425 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:34:09.781762   48425 ssh_runner.go:195] Run: systemctl --version
	I0802 18:34:09.789281   48425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:34:09.963732   48425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:34:09.970692   48425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:34:09.970767   48425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:34:09.991688   48425 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:34:09.991709   48425 start.go:495] detecting cgroup driver to use...
	I0802 18:34:09.991772   48425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:34:10.012800   48425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:34:10.031255   48425 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:34:10.031309   48425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:34:10.048593   48425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:34:10.066332   48425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:34:10.203369   48425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:34:10.407455   48425 docker.go:233] disabling docker service ...
	I0802 18:34:10.407523   48425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:34:10.430958   48425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:34:10.447073   48425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:34:10.596576   48425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:34:10.725608   48425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:34:10.743915   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:34:10.771581   48425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0802 18:34:10.771649   48425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:34:10.782980   48425 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:34:10.783054   48425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:34:10.794426   48425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:34:10.805426   48425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:34:10.816123   48425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:34:10.831285   48425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:34:10.844981   48425 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:34:10.845045   48425 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:34:10.862123   48425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:34:10.872065   48425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:34:11.016696   48425 ssh_runner.go:195] Run: sudo systemctl restart crio
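
For reference, the cri-o reconfiguration performed in the log lines above can be collected into one short sequence. This is a minimal illustrative sketch assembled from the commands shown in the log (run as root on the guest); it is not the exact code path minikube executes:

	# Point crictl at the cri-o socket (same content the runner tees into /etc/crictl.yaml).
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and switch cri-o to the cgroupfs cgroup manager, as in the sed edits above.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Reload systemd units and restart cri-o so the new configuration takes effect.
	sudo systemctl daemon-reload
	sudo systemctl restart crio
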
	I0802 18:34:11.179162   48425 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:34:11.179232   48425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:34:11.183736   48425 start.go:563] Will wait 60s for crictl version
	I0802 18:34:11.183783   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:11.188027   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:34:11.230094   48425 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:34:11.230158   48425 ssh_runner.go:195] Run: crio --version
	I0802 18:34:11.258067   48425 ssh_runner.go:195] Run: crio --version
	I0802 18:34:11.287310   48425 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0802 18:34:11.288661   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetIP
	I0802 18:34:11.291973   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:11.292448   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:33:59 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:34:11.292484   48425 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:34:11.292727   48425 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0802 18:34:11.297982   48425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:34:11.312422   48425 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-132946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-132946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:34:11.312556   48425 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:34:11.312609   48425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:34:11.350666   48425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:34:11.350738   48425 ssh_runner.go:195] Run: which lz4
	I0802 18:34:11.354521   48425 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:34:11.358639   48425 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:34:11.358671   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0802 18:34:12.788477   48425 crio.go:462] duration metric: took 1.433982833s to copy over tarball
	I0802 18:34:12.788546   48425 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:34:15.254579   48425 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.465999753s)
	I0802 18:34:15.254613   48425 crio.go:469] duration metric: took 2.466106257s to extract the tarball
	I0802 18:34:15.254622   48425 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:34:15.296929   48425 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:34:15.340592   48425 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:34:15.340616   48425 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 18:34:15.340656   48425 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:34:15.340702   48425 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:34:15.340726   48425 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:34:15.340737   48425 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:34:15.340742   48425 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:34:15.340823   48425 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0802 18:34:15.340828   48425 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0802 18:34:15.340704   48425 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:34:15.342454   48425 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:34:15.342474   48425 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0802 18:34:15.342475   48425 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:34:15.342605   48425 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0802 18:34:15.342481   48425 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:34:15.342684   48425 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:34:15.342980   48425 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:34:15.343043   48425 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:34:15.555342   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0802 18:34:15.592704   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:34:15.598884   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:34:15.598969   48425 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0802 18:34:15.599006   48425 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0802 18:34:15.599041   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.617742   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:34:15.618766   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0802 18:34:15.619123   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0802 18:34:15.623439   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:34:15.659506   48425 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0802 18:34:15.659547   48425 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:34:15.659605   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.684022   48425 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0802 18:34:15.684069   48425 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:34:15.684072   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0802 18:34:15.684101   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.764213   48425 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0802 18:34:15.764293   48425 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0802 18:34:15.764339   48425 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:34:15.764375   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:34:15.764410   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.764309   48425 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:34:15.764462   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.764236   48425 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0802 18:34:15.764543   48425 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0802 18:34:15.764573   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.764216   48425 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0802 18:34:15.764612   48425 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:34:15.764632   48425 ssh_runner.go:195] Run: which crictl
	I0802 18:34:15.768951   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:34:15.769040   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0802 18:34:15.773066   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0802 18:34:15.780672   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:34:15.861932   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0802 18:34:15.862014   48425 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:34:15.862029   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0802 18:34:15.862055   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0802 18:34:15.862159   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0802 18:34:15.864540   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0802 18:34:15.908568   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0802 18:34:15.908602   48425 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0802 18:34:16.253255   48425 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:34:16.393914   48425 cache_images.go:92] duration metric: took 1.053282573s to LoadCachedImages
	W0802 18:34:16.393984   48425 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0802 18:34:16.393997   48425 kubeadm.go:934] updating node { 192.168.72.113 8443 v1.20.0 crio true true} ...
	I0802 18:34:16.394116   48425 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-132946 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.113
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-132946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
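
The [Unit]/[Service] fragment logged just above is the kubelet drop-in that minikube then copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below). As a rough sketch, writing the same drop-in by hand would look like the following, with the ExecStart line copied verbatim from the log:

	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-132946 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.113

	[Install]
	EOF
	# Pick up the new drop-in and start the kubelet, as the runner does below.
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
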
	I0802 18:34:16.394207   48425 ssh_runner.go:195] Run: crio config
	I0802 18:34:16.440518   48425 cni.go:84] Creating CNI manager for ""
	I0802 18:34:16.440554   48425 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:34:16.440569   48425 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:34:16.440587   48425 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.113 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-132946 NodeName:kubernetes-upgrade-132946 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.113"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.113 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0802 18:34:16.440708   48425 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.113
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-132946"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.113
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.113"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:34:16.440778   48425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0802 18:34:16.450150   48425 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:34:16.450225   48425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:34:16.461871   48425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0802 18:34:16.479380   48425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:34:16.496621   48425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0802 18:34:16.513003   48425 ssh_runner.go:195] Run: grep 192.168.72.113	control-plane.minikube.internal$ /etc/hosts
	I0802 18:34:16.516815   48425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.113	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:34:16.528625   48425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:34:16.661339   48425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:34:16.682860   48425 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946 for IP: 192.168.72.113
	I0802 18:34:16.682887   48425 certs.go:194] generating shared ca certs ...
	I0802 18:34:16.682909   48425 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:16.683097   48425 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:34:16.683183   48425 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:34:16.683198   48425 certs.go:256] generating profile certs ...
	I0802 18:34:16.683267   48425 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.key
	I0802 18:34:16.683302   48425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.crt with IP's: []
	I0802 18:34:16.935950   48425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.crt ...
	I0802 18:34:16.935995   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.crt: {Name:mkaea15168a7caae05ceda4d9e6f0af9194e108f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:16.936184   48425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.key ...
	I0802 18:34:16.936202   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.key: {Name:mk3d922bee9b656cab0b2d00cb0d4981347f6331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:16.936313   48425 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key.80f80565
	I0802 18:34:16.936335   48425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt.80f80565 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.113]
	I0802 18:34:17.023643   48425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt.80f80565 ...
	I0802 18:34:17.023670   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt.80f80565: {Name:mkc99d940bfdcbf28c75dda17d3d927ed75dfc76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:17.038574   48425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key.80f80565 ...
	I0802 18:34:17.038613   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key.80f80565: {Name:mk86a09dd59826b3114b15799880195bb27395dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:17.038747   48425 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt.80f80565 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt
	I0802 18:34:17.038843   48425 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key.80f80565 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key
	I0802 18:34:17.038925   48425 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.key
	I0802 18:34:17.038949   48425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.crt with IP's: []
	I0802 18:34:17.119844   48425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.crt ...
	I0802 18:34:17.119875   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.crt: {Name:mkd7d95e1547f2b39ec11a4d523ad151845e0f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:17.120047   48425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.key ...
	I0802 18:34:17.120063   48425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.key: {Name:mkc699b600e34e892dbe8f227d3e245c62584906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:34:17.120256   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:34:17.120314   48425 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:34:17.120328   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:34:17.120367   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:34:17.120433   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:34:17.120468   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:34:17.120524   48425 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:34:17.121067   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:34:17.145422   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:34:17.168603   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:34:17.193996   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:34:17.219566   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0802 18:34:17.243185   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:34:17.265798   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:34:17.288594   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 18:34:17.311859   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:34:17.334806   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:34:17.358814   48425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:34:17.381023   48425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:34:17.396219   48425 ssh_runner.go:195] Run: openssl version
	I0802 18:34:17.401706   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:34:17.411596   48425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:34:17.415542   48425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:34:17.415590   48425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:34:17.421343   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:34:17.431279   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:34:17.441255   48425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:34:17.446017   48425 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:34:17.446084   48425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:34:17.451629   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:34:17.462043   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:34:17.472409   48425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:34:17.477638   48425 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:34:17.477688   48425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:34:17.484621   48425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
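
The openssl/ln pairs above are the usual way a CA file is registered for OpenSSL lookup: the certificate is placed under /usr/share/ca-certificates and symlinked into /etc/ssl/certs under its subject-hash name. A minimal sketch of that step for one certificate (path taken from the log; the hash value is computed, so it differs per certificate, and the exact link targets minikube uses may differ slightly):

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	# Register the cert under its subject hash so OpenSSL's cert directory lookup finds it.
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
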
	I0802 18:34:17.496251   48425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:34:17.500849   48425 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 18:34:17.500910   48425 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-132946 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-132946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:34:17.501002   48425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:34:17.501077   48425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:34:17.545869   48425 cri.go:89] found id: ""
	I0802 18:34:17.545933   48425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:34:17.561293   48425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:34:17.571132   48425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:34:17.584134   48425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:34:17.584163   48425 kubeadm.go:157] found existing configuration files:
	
	I0802 18:34:17.584215   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:34:17.593667   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:34:17.593749   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:34:17.605084   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:34:17.618509   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:34:17.618588   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:34:17.630519   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:34:17.645596   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:34:17.645663   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:34:17.662799   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:34:17.671732   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:34:17.671791   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:34:17.680838   48425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:34:17.796089   48425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:34:17.796196   48425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:34:17.924918   48425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:34:17.925019   48425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:34:17.925098   48425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:34:18.100631   48425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:34:18.294335   48425 out.go:204]   - Generating certificates and keys ...
	I0802 18:34:18.294474   48425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:34:18.294584   48425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:34:18.294690   48425 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 18:34:18.294775   48425 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 18:34:18.466268   48425 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 18:34:18.736767   48425 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 18:34:18.886986   48425 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 18:34:18.887204   48425 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-132946 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	I0802 18:34:19.032110   48425 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 18:34:19.032346   48425 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-132946 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	I0802 18:34:19.223122   48425 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 18:34:19.523509   48425 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 18:34:19.628275   48425 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 18:34:19.630251   48425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:34:19.833800   48425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:34:19.944554   48425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:34:20.026810   48425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:34:20.095769   48425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:34:20.111068   48425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:34:20.112063   48425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:34:20.112127   48425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:34:20.250776   48425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:34:20.254034   48425 out.go:204]   - Booting up control plane ...
	I0802 18:34:20.254185   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:34:20.258107   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:34:20.259130   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:34:20.260730   48425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:34:20.268975   48425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:35:00.238401   48425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:35:00.239191   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:35:00.239390   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:35:05.238409   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:35:05.238841   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:35:15.237631   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:35:15.237879   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:35:35.238258   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:35:35.238435   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:36:15.237354   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:36:15.237644   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:36:15.237667   48425 kubeadm.go:310] 
	I0802 18:36:15.237721   48425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:36:15.237781   48425 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:36:15.237811   48425 kubeadm.go:310] 
	I0802 18:36:15.237901   48425 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:36:15.237977   48425 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:36:15.238118   48425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:36:15.238127   48425 kubeadm.go:310] 
	I0802 18:36:15.238271   48425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:36:15.238324   48425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:36:15.238382   48425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:36:15.238395   48425 kubeadm.go:310] 
	I0802 18:36:15.238578   48425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:36:15.238689   48425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:36:15.238698   48425 kubeadm.go:310] 
	I0802 18:36:15.238807   48425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:36:15.238920   48425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:36:15.239018   48425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:36:15.239129   48425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:36:15.239141   48425 kubeadm.go:310] 
	I0802 18:36:15.239878   48425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:36:15.239972   48425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:36:15.240046   48425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
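
kubeadm's own suggestions above are the quickest way to see why the control plane never came up. Collected as a runnable sequence on the guest (CONTAINERID is a placeholder for whichever container ID the ps listing returns):

	systemctl status kubelet
	journalctl -xeu kubelet
	# List Kubernetes containers known to cri-o, excluding pause containers.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of whichever container failed.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
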
	W0802 18:36:15.240223   48425 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-132946 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-132946 localhost] and IPs [192.168.72.113 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0802 18:36:15.240268   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 18:36:15.973697   48425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:36:15.987797   48425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:36:15.997010   48425 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:36:15.997026   48425 kubeadm.go:157] found existing configuration files:
	
	I0802 18:36:15.997067   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:36:16.005825   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:36:16.005879   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:36:16.015450   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:36:16.023921   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:36:16.023967   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:36:16.032906   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:36:16.041371   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:36:16.041423   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:36:16.051268   48425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:36:16.061058   48425 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:36:16.061110   48425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:36:16.070850   48425 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:36:16.150565   48425 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:36:16.150687   48425 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:36:16.309825   48425 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:36:16.309944   48425 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:36:16.310068   48425 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:36:16.504492   48425 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:36:16.506378   48425 out.go:204]   - Generating certificates and keys ...
	I0802 18:36:16.506494   48425 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:36:16.506580   48425 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:36:16.506681   48425 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:36:16.506780   48425 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:36:16.506893   48425 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:36:16.506980   48425 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:36:16.507064   48425 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:36:16.507300   48425 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:36:16.507826   48425 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:36:16.508182   48425 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:36:16.508238   48425 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:36:16.508291   48425 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:36:16.961322   48425 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:36:17.099304   48425 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:36:17.615248   48425 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:36:17.809613   48425 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:36:17.823917   48425 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:36:17.825863   48425 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:36:17.825941   48425 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:36:17.953638   48425 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:36:17.955360   48425 out.go:204]   - Booting up control plane ...
	I0802 18:36:17.955476   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:36:17.968682   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:36:17.970032   48425 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:36:17.971133   48425 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:36:17.973741   48425 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:36:57.976373   48425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:36:57.976489   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:36:57.976733   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:02.977213   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:37:02.977527   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:12.978153   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:37:12.978450   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:32.980044   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:37:32.980369   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:38:12.979262   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:38:12.979518   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:38:12.979532   48425 kubeadm.go:310] 
	I0802 18:38:12.979614   48425 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:38:12.979730   48425 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:38:12.979760   48425 kubeadm.go:310] 
	I0802 18:38:12.979816   48425 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:38:12.979879   48425 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:38:12.980014   48425 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:38:12.980026   48425 kubeadm.go:310] 
	I0802 18:38:12.980198   48425 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:38:12.980249   48425 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:38:12.980307   48425 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:38:12.980324   48425 kubeadm.go:310] 
	I0802 18:38:12.980452   48425 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:38:12.980558   48425 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:38:12.980568   48425 kubeadm.go:310] 
	I0802 18:38:12.980703   48425 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:38:12.980813   48425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:38:12.980906   48425 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:38:12.980995   48425 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:38:12.981005   48425 kubeadm.go:310] 
	I0802 18:38:12.981651   48425 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:38:12.981727   48425 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:38:12.981783   48425 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:38:12.981879   48425 kubeadm.go:394] duration metric: took 3m55.480973817s to StartCluster
	I0802 18:38:12.981923   48425 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:38:12.981996   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:38:13.025792   48425 cri.go:89] found id: ""
	I0802 18:38:13.025822   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.025833   48425 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:38:13.025840   48425 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:38:13.025909   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:38:13.059728   48425 cri.go:89] found id: ""
	I0802 18:38:13.059758   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.059769   48425 logs.go:278] No container was found matching "etcd"
	I0802 18:38:13.059777   48425 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:38:13.059836   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:38:13.096804   48425 cri.go:89] found id: ""
	I0802 18:38:13.096832   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.096842   48425 logs.go:278] No container was found matching "coredns"
	I0802 18:38:13.096851   48425 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:38:13.096913   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:38:13.134397   48425 cri.go:89] found id: ""
	I0802 18:38:13.134429   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.134442   48425 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:38:13.134451   48425 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:38:13.134545   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:38:13.172762   48425 cri.go:89] found id: ""
	I0802 18:38:13.172789   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.172806   48425 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:38:13.172813   48425 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:38:13.172877   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:38:13.210995   48425 cri.go:89] found id: ""
	I0802 18:38:13.211031   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.211044   48425 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:38:13.211053   48425 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:38:13.211138   48425 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:38:13.248402   48425 cri.go:89] found id: ""
	I0802 18:38:13.248429   48425 logs.go:276] 0 containers: []
	W0802 18:38:13.248437   48425 logs.go:278] No container was found matching "kindnet"
	I0802 18:38:13.248446   48425 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:38:13.248461   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:38:13.373984   48425 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:38:13.374007   48425 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:38:13.374027   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:38:13.489154   48425 logs.go:123] Gathering logs for container status ...
	I0802 18:38:13.489192   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:38:13.530082   48425 logs.go:123] Gathering logs for kubelet ...
	I0802 18:38:13.530120   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:38:13.597671   48425 logs.go:123] Gathering logs for dmesg ...
	I0802 18:38:13.597706   48425 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0802 18:38:13.628852   48425 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:38:13.628903   48425 out.go:239] * 
	W0802 18:38:13.628974   48425 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:38:13.629008   48425 out.go:239] * 
	W0802 18:38:13.630032   48425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:38:13.634109   48425 out.go:177] 
	W0802 18:38:13.635602   48425 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:38:13.635667   48425 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:38:13.635750   48425 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:38:13.637247   48425 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
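The kubeadm output above shows the kubelet never becoming healthy during the v1.20.0 first start. A minimal troubleshooting sketch, using only the commands already suggested in that output (run on the node, for example via 'minikube ssh -p kubernetes-upgrade-132946'; the profile name is taken from the test above, and the final retry flag is the suggestion printed at the end of the log):

	# Check whether the kubelet is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# List control-plane containers under CRI-O to spot a crashed component
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container (CONTAINERID from the listing above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# The log also suggests retrying with an explicit cgroup driver:
	minikube start -p kubernetes-upgrade-132946 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd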
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-132946
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-132946: (6.328381695s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-132946 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-132946 status --format={{.Host}}: exit status 7 (75.802856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.12958165s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-132946 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.887897ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-132946] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-132946
	    minikube start -p kubernetes-upgrade-132946 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1329462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-132946 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-132946 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m49.809739035s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-02 18:45:03.180378453 +0000 UTC m=+4724.298546072
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-132946 -n kubernetes-upgrade-132946
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-132946 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-800809                                       | cilium-800809                | jenkins | v1.33.1 | 02 Aug 24 18:37 UTC | 02 Aug 24 18:37 UTC |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:37 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | cert-options-643429 ssh                                | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-643429 -- sudo                         | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-643429                                 | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:39 UTC | 02 Aug 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-139745                              | cert-expiration-139745       | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-139745                              | cert-expiration-139745       | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:40 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:42 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407306             | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:41 UTC | 02 Aug 24 18:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504903  | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC | 02 Aug 24 18:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:44:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:44:49.319247   58864 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:44:49.319526   58864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:44:49.319535   58864 out.go:304] Setting ErrFile to fd 2...
	I0802 18:44:49.319539   58864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:44:49.319708   58864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:44:49.320213   58864 out.go:298] Setting JSON to false
	I0802 18:44:49.321094   58864 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5233,"bootTime":1722619056,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:44:49.321146   58864 start.go:139] virtualization: kvm guest
	I0802 18:44:49.323436   58864 out.go:177] * [default-k8s-diff-port-504903] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:44:49.324889   58864 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:44:49.324888   58864 notify.go:220] Checking for updates...
	I0802 18:44:49.327819   58864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:44:49.329225   58864 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:44:49.330538   58864 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:44:49.331829   58864 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:44:49.333301   58864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:44:49.335188   58864 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:44:49.335799   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:44:49.335860   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:44:49.350513   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45689
	I0802 18:44:49.350931   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:44:49.351476   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:44:49.351505   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:44:49.352023   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:44:49.352208   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:44:49.352483   58864 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:44:49.352789   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:44:49.352830   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:44:49.367474   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0802 18:44:49.367938   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:44:49.368429   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:44:49.368450   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:44:49.368745   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:44:49.368981   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:44:49.404702   58864 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:44:49.406248   58864 start.go:297] selected driver: kvm2
	I0802 18:44:49.406264   58864 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-504903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-504903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:44:49.406408   58864 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:44:49.407324   58864 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:44:49.407417   58864 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:44:49.422190   58864 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:44:49.422552   58864 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:44:49.422613   58864 cni.go:84] Creating CNI manager for ""
	I0802 18:44:49.422626   58864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:44:49.422663   58864 start.go:340] cluster config:
	{Name:default-k8s-diff-port-504903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-504903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:44:49.422751   58864 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:44:49.424749   58864 out.go:177] * Starting "default-k8s-diff-port-504903" primary control-plane node in "default-k8s-diff-port-504903" cluster
	I0802 18:44:50.035220   56263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:44:50.048421   56263 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:44:50.048505   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:44:50.091950   56263 cri.go:89] found id: "eab4e4c001dd1655df1cd50475bded2755a2532ce67e2fea2f901c230d139a76"
	I0802 18:44:50.091971   56263 cri.go:89] found id: ""
	I0802 18:44:50.091978   56263 logs.go:276] 1 containers: [eab4e4c001dd1655df1cd50475bded2755a2532ce67e2fea2f901c230d139a76]
	I0802 18:44:50.092035   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.095860   56263 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:44:50.095922   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:44:50.127458   56263 cri.go:89] found id: "86a1ba31250201f8b9a094dee4730acf5c790de66d97d6db87fa42e852d53f33"
	I0802 18:44:50.127483   56263 cri.go:89] found id: "15de59e452c68e2679547c0423e63e5e7de9167312b262eb91a49d8a1880d9ca"
	I0802 18:44:50.127490   56263 cri.go:89] found id: ""
	I0802 18:44:50.127498   56263 logs.go:276] 2 containers: [86a1ba31250201f8b9a094dee4730acf5c790de66d97d6db87fa42e852d53f33 15de59e452c68e2679547c0423e63e5e7de9167312b262eb91a49d8a1880d9ca]
	I0802 18:44:50.127566   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.131416   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.134989   56263 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:44:50.135045   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:44:50.167145   56263 cri.go:89] found id: ""
	I0802 18:44:50.167172   56263 logs.go:276] 0 containers: []
	W0802 18:44:50.167182   56263 logs.go:278] No container was found matching "coredns"
	I0802 18:44:50.167189   56263 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:44:50.167249   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:44:50.199534   56263 cri.go:89] found id: "2e6e1d562e2868c4cd599d6a999eb4081ee94d202a2c450aab4db1330c9380b1"
	I0802 18:44:50.199556   56263 cri.go:89] found id: "8ba04e03e9f13ddfe599795b142f5d39f3c599b90596fa5a146aaeb7b7af91d0"
	I0802 18:44:50.199561   56263 cri.go:89] found id: ""
	I0802 18:44:50.199569   56263 logs.go:276] 2 containers: [2e6e1d562e2868c4cd599d6a999eb4081ee94d202a2c450aab4db1330c9380b1 8ba04e03e9f13ddfe599795b142f5d39f3c599b90596fa5a146aaeb7b7af91d0]
	I0802 18:44:50.199625   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.203250   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.206949   56263 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:44:50.207012   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:44:50.245928   56263 cri.go:89] found id: ""
	I0802 18:44:50.245959   56263 logs.go:276] 0 containers: []
	W0802 18:44:50.245967   56263 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:44:50.245972   56263 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:44:50.246020   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:44:50.284253   56263 cri.go:89] found id: "7ec9f2d116722e36aafb3b3988805c538aa096895bf5e99390b3c56fc1c1bfbb"
	I0802 18:44:50.284280   56263 cri.go:89] found id: ""
	I0802 18:44:50.284289   56263 logs.go:276] 1 containers: [7ec9f2d116722e36aafb3b3988805c538aa096895bf5e99390b3c56fc1c1bfbb]
	I0802 18:44:50.284349   56263 ssh_runner.go:195] Run: which crictl
	I0802 18:44:50.287972   56263 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:44:50.288030   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:44:50.319463   56263 cri.go:89] found id: ""
	I0802 18:44:50.319492   56263 logs.go:276] 0 containers: []
	W0802 18:44:50.319502   56263 logs.go:278] No container was found matching "kindnet"
	I0802 18:44:50.319508   56263 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0802 18:44:50.319563   56263 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0802 18:44:50.352283   56263 cri.go:89] found id: ""
	I0802 18:44:50.352311   56263 logs.go:276] 0 containers: []
	W0802 18:44:50.352328   56263 logs.go:278] No container was found matching "storage-provisioner"
	I0802 18:44:50.352338   56263 logs.go:123] Gathering logs for etcd [86a1ba31250201f8b9a094dee4730acf5c790de66d97d6db87fa42e852d53f33] ...
	I0802 18:44:50.352360   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 86a1ba31250201f8b9a094dee4730acf5c790de66d97d6db87fa42e852d53f33"
	I0802 18:44:50.392360   56263 logs.go:123] Gathering logs for etcd [15de59e452c68e2679547c0423e63e5e7de9167312b262eb91a49d8a1880d9ca] ...
	I0802 18:44:50.392394   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15de59e452c68e2679547c0423e63e5e7de9167312b262eb91a49d8a1880d9ca"
	I0802 18:44:50.431318   56263 logs.go:123] Gathering logs for kube-scheduler [2e6e1d562e2868c4cd599d6a999eb4081ee94d202a2c450aab4db1330c9380b1] ...
	I0802 18:44:50.431354   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e6e1d562e2868c4cd599d6a999eb4081ee94d202a2c450aab4db1330c9380b1"
	I0802 18:44:50.505201   56263 logs.go:123] Gathering logs for kubelet ...
	I0802 18:44:50.505236   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:44:50.619666   56263 logs.go:123] Gathering logs for kube-apiserver [eab4e4c001dd1655df1cd50475bded2755a2532ce67e2fea2f901c230d139a76] ...
	I0802 18:44:50.619701   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eab4e4c001dd1655df1cd50475bded2755a2532ce67e2fea2f901c230d139a76"
	I0802 18:44:50.666264   56263 logs.go:123] Gathering logs for kube-scheduler [8ba04e03e9f13ddfe599795b142f5d39f3c599b90596fa5a146aaeb7b7af91d0] ...
	I0802 18:44:50.666299   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ba04e03e9f13ddfe599795b142f5d39f3c599b90596fa5a146aaeb7b7af91d0"
	I0802 18:44:50.706286   56263 logs.go:123] Gathering logs for kube-controller-manager [7ec9f2d116722e36aafb3b3988805c538aa096895bf5e99390b3c56fc1c1bfbb] ...
	I0802 18:44:50.706312   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ec9f2d116722e36aafb3b3988805c538aa096895bf5e99390b3c56fc1c1bfbb"
	I0802 18:44:50.748607   56263 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:44:50.748638   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:44:50.983377   56263 logs.go:123] Gathering logs for container status ...
	I0802 18:44:50.983410   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:44:51.029426   56263 logs.go:123] Gathering logs for dmesg ...
	I0802 18:44:51.029454   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:44:51.044235   56263 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:44:51.044263   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:44:51.112133   56263 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:44:49.426049   58864 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:44:49.426092   58864 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:44:49.426115   58864 cache.go:56] Caching tarball of preloaded images
	I0802 18:44:49.426186   58864 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:44:49.426196   58864 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:44:49.426299   58864 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/config.json ...
	I0802 18:44:49.426471   58864 start.go:360] acquireMachinesLock for default-k8s-diff-port-504903: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:44:52.699370   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:55.771425   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:53.612764   56263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:44:53.626139   56263 kubeadm.go:597] duration metric: took 4m3.230182335s to restartPrimaryControlPlane
	W0802 18:44:53.626206   56263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 18:44:53.626237   56263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 18:44:54.437252   56263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:44:54.451897   56263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:44:54.461623   56263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:44:54.471072   56263 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:44:54.471092   56263 kubeadm.go:157] found existing configuration files:
	
	I0802 18:44:54.471158   56263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:44:54.479756   56263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:44:54.479815   56263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:44:54.488739   56263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:44:54.497524   56263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:44:54.497590   56263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:44:54.506463   56263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:44:54.514821   56263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:44:54.514887   56263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:44:54.523742   56263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:44:54.532071   56263 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:44:54.532134   56263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:44:54.541042   56263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:44:54.582132   56263 kubeadm.go:310] W0802 18:44:54.571665    7657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0802 18:44:54.583914   56263 kubeadm.go:310] W0802 18:44:54.573421    7657 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0802 18:44:54.688593   56263 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:45:02.239189   56263 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0802 18:45:02.239264   56263 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:45:02.239377   56263 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:45:02.239536   56263 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:45:02.239629   56263 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0802 18:45:02.239685   56263 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:45:02.241051   56263 out.go:204]   - Generating certificates and keys ...
	I0802 18:45:02.241121   56263 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:45:02.241202   56263 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:45:02.241272   56263 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:45:02.241337   56263 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:45:02.241394   56263 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:45:02.241446   56263 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:45:02.241507   56263 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:45:02.241564   56263 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:45:02.241648   56263 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:45:02.241713   56263 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:45:02.241750   56263 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:45:02.241811   56263 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:45:02.241866   56263 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:45:02.241918   56263 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 18:45:02.241962   56263 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:45:02.242016   56263 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:45:02.242069   56263 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:45:02.242168   56263 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:45:02.242267   56263 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:45:02.243726   56263 out.go:204]   - Booting up control plane ...
	I0802 18:45:02.243818   56263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:45:02.243903   56263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:45:02.243999   56263 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:45:02.244113   56263 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:45:02.244242   56263 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:45:02.244304   56263 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:45:02.244493   56263 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 18:45:02.244594   56263 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0802 18:45:02.244646   56263 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.074698ms
	I0802 18:45:02.244705   56263 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 18:45:02.244759   56263 kubeadm.go:310] [api-check] The API server is healthy after 5.001466591s
	I0802 18:45:02.244851   56263 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 18:45:02.244955   56263 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 18:45:02.245003   56263 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 18:45:02.245170   56263 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-132946 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 18:45:02.245225   56263 kubeadm.go:310] [bootstrap-token] Using token: 2gv03l.pecwsmfmz9k3l1cl
	I0802 18:45:02.247427   56263 out.go:204]   - Configuring RBAC rules ...
	I0802 18:45:02.247542   56263 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 18:45:02.247633   56263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 18:45:02.247774   56263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 18:45:02.247938   56263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 18:45:02.248095   56263 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 18:45:02.248210   56263 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 18:45:02.248327   56263 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 18:45:02.248365   56263 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 18:45:02.248405   56263 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 18:45:02.248411   56263 kubeadm.go:310] 
	I0802 18:45:02.248479   56263 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 18:45:02.248489   56263 kubeadm.go:310] 
	I0802 18:45:02.248578   56263 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 18:45:02.248585   56263 kubeadm.go:310] 
	I0802 18:45:02.248610   56263 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 18:45:02.248659   56263 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 18:45:02.248714   56263 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 18:45:02.248725   56263 kubeadm.go:310] 
	I0802 18:45:02.248790   56263 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 18:45:02.248801   56263 kubeadm.go:310] 
	I0802 18:45:02.248839   56263 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 18:45:02.248845   56263 kubeadm.go:310] 
	I0802 18:45:02.248886   56263 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 18:45:02.248952   56263 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 18:45:02.249008   56263 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 18:45:02.249017   56263 kubeadm.go:310] 
	I0802 18:45:02.249090   56263 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 18:45:02.249173   56263 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 18:45:02.249181   56263 kubeadm.go:310] 
	I0802 18:45:02.249262   56263 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2gv03l.pecwsmfmz9k3l1cl \
	I0802 18:45:02.249390   56263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 18:45:02.249420   56263 kubeadm.go:310] 	--control-plane 
	I0802 18:45:02.249430   56263 kubeadm.go:310] 
	I0802 18:45:02.249531   56263 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 18:45:02.249538   56263 kubeadm.go:310] 
	I0802 18:45:02.249632   56263 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2gv03l.pecwsmfmz9k3l1cl \
	I0802 18:45:02.249740   56263 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 18:45:02.249755   56263 cni.go:84] Creating CNI manager for ""
	I0802 18:45:02.249762   56263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:45:02.251918   56263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:45:02.253210   56263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:45:02.264365   56263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 18:45:02.282436   56263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:45:02.282499   56263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:45:02.282574   56263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-132946 minikube.k8s.io/updated_at=2024_08_02T18_45_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=kubernetes-upgrade-132946 minikube.k8s.io/primary=true
	I0802 18:45:02.309903   56263 ops.go:34] apiserver oom_adj: -16
	I0802 18:45:02.403146   56263 kubeadm.go:1113] duration metric: took 120.708328ms to wait for elevateKubeSystemPrivileges
	I0802 18:45:02.427071   56263 kubeadm.go:394] duration metric: took 4m12.112456871s to StartCluster
	I0802 18:45:02.427142   56263 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:45:02.427234   56263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:45:02.428384   56263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:45:02.428594   56263 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:45:02.428655   56263 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 18:45:02.428732   56263 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-132946"
	I0802 18:45:02.428753   56263 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-132946"
	I0802 18:45:02.428800   56263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-132946"
	I0802 18:45:02.428801   56263 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:45:02.428762   56263 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-132946"
	W0802 18:45:02.428855   56263 addons.go:243] addon storage-provisioner should already be in state true
	I0802 18:45:02.428886   56263 host.go:66] Checking if "kubernetes-upgrade-132946" exists ...
	I0802 18:45:02.429107   56263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:45:02.429159   56263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:45:02.429176   56263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:45:02.429205   56263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:45:02.429987   56263 out.go:177] * Verifying Kubernetes components...
	I0802 18:45:02.431498   56263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:45:02.444878   56263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
	I0802 18:45:02.444878   56263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33035
	I0802 18:45:02.445360   56263 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:45:02.445497   56263 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:45:02.445898   56263 main.go:141] libmachine: Using API Version  1
	I0802 18:45:02.445922   56263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:45:02.445993   56263 main.go:141] libmachine: Using API Version  1
	I0802 18:45:02.446017   56263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:45:02.446326   56263 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:45:02.446345   56263 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:45:02.446536   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetState
	I0802 18:45:02.446819   56263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:45:02.446845   56263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:45:02.449405   56263 kapi.go:59] client config for kubernetes-upgrade-132946: &rest.Config{Host:"https://192.168.72.113:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.crt", KeyFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kubernetes-upgrade-132946/client.key", CAFile:"/home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0802 18:45:02.449833   56263 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-132946"
	W0802 18:45:02.449852   56263 addons.go:243] addon default-storageclass should already be in state true
	I0802 18:45:02.449880   56263 host.go:66] Checking if "kubernetes-upgrade-132946" exists ...
	I0802 18:45:02.450235   56263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:45:02.450266   56263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:45:02.464810   56263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I0802 18:45:02.465226   56263 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:45:02.465654   56263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40497
	I0802 18:45:02.465770   56263 main.go:141] libmachine: Using API Version  1
	I0802 18:45:02.465789   56263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:45:02.465980   56263 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:45:02.466126   56263 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:45:02.466408   56263 main.go:141] libmachine: Using API Version  1
	I0802 18:45:02.466424   56263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:45:02.466603   56263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:45:02.466628   56263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:45:02.466703   56263 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:45:02.466934   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetState
	I0802 18:45:02.469138   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:45:02.471022   56263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:45:02.472391   56263 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:45:02.472404   56263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 18:45:02.472419   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:45:02.475473   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:45:02.475950   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:38:41 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:45:02.475976   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:45:02.476159   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:45:02.476363   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:45:02.476562   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:45:02.476719   56263 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:45:02.482984   56263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0802 18:45:02.483413   56263 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:45:02.483799   56263 main.go:141] libmachine: Using API Version  1
	I0802 18:45:02.483819   56263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:45:02.484120   56263 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:45:02.484271   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetState
	I0802 18:45:02.485821   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .DriverName
	I0802 18:45:02.486028   56263 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 18:45:02.486039   56263 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 18:45:02.486052   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHHostname
	I0802 18:45:02.489075   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:45:02.489546   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:a0:7e", ip: ""} in network mk-kubernetes-upgrade-132946: {Iface:virbr4 ExpiryTime:2024-08-02 19:38:41 +0000 UTC Type:0 Mac:52:54:00:af:a0:7e Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:kubernetes-upgrade-132946 Clientid:01:52:54:00:af:a0:7e}
	I0802 18:45:02.489583   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | domain kubernetes-upgrade-132946 has defined IP address 192.168.72.113 and MAC address 52:54:00:af:a0:7e in network mk-kubernetes-upgrade-132946
	I0802 18:45:02.489747   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHPort
	I0802 18:45:02.489917   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHKeyPath
	I0802 18:45:02.490077   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .GetSSHUsername
	I0802 18:45:02.490204   56263 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/kubernetes-upgrade-132946/id_rsa Username:docker}
	I0802 18:45:02.588716   56263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:45:02.615925   56263 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:45:02.616008   56263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:45:02.634734   56263 api_server.go:72] duration metric: took 206.106501ms to wait for apiserver process to appear ...
	I0802 18:45:02.634761   56263 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:45:02.634784   56263 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8443/healthz ...
	I0802 18:45:02.643637   56263 api_server.go:279] https://192.168.72.113:8443/healthz returned 200:
	ok
	I0802 18:45:02.653102   56263 api_server.go:141] control plane version: v1.31.0-rc.0
	I0802 18:45:02.653136   56263 api_server.go:131] duration metric: took 18.368497ms to wait for apiserver health ...
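	For reference, the /healthz probe recorded just above can be reproduced with a short standalone Go program. This is a minimal sketch, not minikube's own api_server.go logic; the endpoint 192.168.72.113:8443 is taken from this run, and TLS verification is skipped purely for brevity (a real client would load the cluster CA instead).

	// healthz_probe.go - minimal sketch of the apiserver health check seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Illustrative only: skip verification instead of loading
				// /var/lib/minikube/certs/ca.crt from the node.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.72.113:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok", as logged above
	}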
	I0802 18:45:02.653154   56263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:45:02.661402   56263 system_pods.go:59] 4 kube-system pods found
	I0802 18:45:02.661438   56263 system_pods.go:61] "etcd-kubernetes-upgrade-132946" [d1260106-d745-4990-9743-47ce9044cb83] Running
	I0802 18:45:02.661444   56263 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-132946" [3fc94e67-aca6-4292-b908-53a757549f2e] Pending
	I0802 18:45:02.661451   56263 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-132946" [6b061112-8279-4e88-a517-8b53fd5fe52b] Pending
	I0802 18:45:02.661456   56263 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-132946" [7966b46f-bf0a-45fb-905c-00146a44a28f] Running
	I0802 18:45:02.661463   56263 system_pods.go:74] duration metric: took 8.301799ms to wait for pod list to return data ...
	I0802 18:45:02.661474   56263 kubeadm.go:582] duration metric: took 232.850053ms to wait for: map[apiserver:true system_pods:true]
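	The kube-system pod wait above can be replicated outside the test harness with client-go. The following is a hedged sketch, not minikube's system_pods.go code; it assumes the kubeconfig written by this run (/home/jenkins/minikube-integration/19355-5397/kubeconfig) and prints the same pod names and phases that appear in the system_pods lines.

	// list_kube_system_pods.go - sketch of listing kube-system pods via client-go.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is the one this run wrote; adjust for another environment.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-5397/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// Mirrors the log output: pod name plus phase (Running / Pending).
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		}
	}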
	I0802 18:45:02.661489   56263 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:45:02.668539   56263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:45:02.668560   56263 node_conditions.go:123] node cpu capacity is 2
	I0802 18:45:02.668581   56263 node_conditions.go:105] duration metric: took 7.079642ms to run NodePressure ...
	I0802 18:45:02.668592   56263 start.go:241] waiting for startup goroutines ...
	I0802 18:45:02.690539   56263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 18:45:02.761623   56263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:45:02.837475   56263 main.go:141] libmachine: Making call to close driver server
	I0802 18:45:02.837513   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Close
	I0802 18:45:02.837809   56263 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:45:02.837825   56263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:45:02.837834   56263 main.go:141] libmachine: Making call to close driver server
	I0802 18:45:02.837841   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Close
	I0802 18:45:02.837843   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Closing plugin on server side
	I0802 18:45:02.838074   56263 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:45:02.838090   56263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:45:02.838148   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Closing plugin on server side
	I0802 18:45:02.848322   56263 main.go:141] libmachine: Making call to close driver server
	I0802 18:45:02.848346   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Close
	I0802 18:45:02.848648   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Closing plugin on server side
	I0802 18:45:02.848684   56263 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:45:02.848693   56263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:45:03.100457   56263 main.go:141] libmachine: Making call to close driver server
	I0802 18:45:03.100481   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Close
	I0802 18:45:03.100767   56263 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:45:03.100783   56263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:45:03.100793   56263 main.go:141] libmachine: Making call to close driver server
	I0802 18:45:03.100806   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) Calling .Close
	I0802 18:45:03.100805   56263 main.go:141] libmachine: (kubernetes-upgrade-132946) DBG | Closing plugin on server side
	I0802 18:45:03.101012   56263 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:45:03.101025   56263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:45:03.102775   56263 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0802 18:45:03.103978   56263 addons.go:510] duration metric: took 675.323112ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0802 18:45:03.104016   56263 start.go:246] waiting for cluster config update ...
	I0802 18:45:03.104030   56263 start.go:255] writing updated cluster config ...
	I0802 18:45:03.104271   56263 ssh_runner.go:195] Run: rm -f paused
	I0802 18:45:03.165884   56263 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0802 18:45:03.167636   56263 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-132946" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.830170212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722624303830145256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8cca5853-6e6f-4a98-b085-665aa22c5305 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.830697012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=303f599d-bb1e-4893-b8b7-3fb030ca1544 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.830775325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=303f599d-bb1e-4893-b8b7-3fb030ca1544 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.830982267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd19e9fe1a10eea5d5ea29d39031d9be75cf0a377a1e7e842c4a0c5b23b1e96b,PodSandboxId:ac65f9a74e147d164f96c56f16d2c4a705042fbe80a87788ca0fe62de3ee6a80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722624296365473162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2abf6bdd3c2c8d30857bac0b13b77b8,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb44c083f8c0612df3c3904ded108e44ea16fc5b8d60f762f61c3aa059b1fad,PodSandboxId:483685d3bcc7fa927fd094c4adbe3504697846f4a6e4b1ff877f53c637944d82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722624296343972601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec0498aa089fea2ed8f5db80a9f9fa3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ab73d8844696a62b526d22facac41958309304b33176d38d3c40d8ff90b066,PodSandboxId:b63a548d6d6734c00c851b78b4ecdfa6468a683d44bc5ebeb43635965bcc60d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722624296306319675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6b727ec1c56f0a19d050e3cbdc511f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7b0e2f4bb547884084daf5670089dfd86c9b3355ca20eb95fafd23c0d3b95e,PodSandboxId:9e6d01203f0c343ed5697e0ffac748e563bbcf33fab8dd63a945106fe56e6912,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722624296262922782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d09d6b5aba56f54181e828de5152b17,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=303f599d-bb1e-4893-b8b7-3fb030ca1544 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.869385738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f3af9cf-9788-4dc5-854f-0fc588eead6d name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.869482840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f3af9cf-9788-4dc5-854f-0fc588eead6d name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.872740119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4318969-21ea-425c-808f-e6451b16a376 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.873559588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722624303873526712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4318969-21ea-425c-808f-e6451b16a376 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.876373704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a19c9c13-5f4e-4a80-8780-04432db61394 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.876448645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a19c9c13-5f4e-4a80-8780-04432db61394 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.876597631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd19e9fe1a10eea5d5ea29d39031d9be75cf0a377a1e7e842c4a0c5b23b1e96b,PodSandboxId:ac65f9a74e147d164f96c56f16d2c4a705042fbe80a87788ca0fe62de3ee6a80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722624296365473162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2abf6bdd3c2c8d30857bac0b13b77b8,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb44c083f8c0612df3c3904ded108e44ea16fc5b8d60f762f61c3aa059b1fad,PodSandboxId:483685d3bcc7fa927fd094c4adbe3504697846f4a6e4b1ff877f53c637944d82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722624296343972601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec0498aa089fea2ed8f5db80a9f9fa3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ab73d8844696a62b526d22facac41958309304b33176d38d3c40d8ff90b066,PodSandboxId:b63a548d6d6734c00c851b78b4ecdfa6468a683d44bc5ebeb43635965bcc60d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722624296306319675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6b727ec1c56f0a19d050e3cbdc511f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7b0e2f4bb547884084daf5670089dfd86c9b3355ca20eb95fafd23c0d3b95e,PodSandboxId:9e6d01203f0c343ed5697e0ffac748e563bbcf33fab8dd63a945106fe56e6912,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722624296262922782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d09d6b5aba56f54181e828de5152b17,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a19c9c13-5f4e-4a80-8780-04432db61394 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.911011512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03c87997-443d-4dc4-9077-9f7d18758234 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.911128742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03c87997-443d-4dc4-9077-9f7d18758234 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.913125128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=477d0473-6bc2-4346-952e-e06d6faab1de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.913485966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722624303913464938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=477d0473-6bc2-4346-952e-e06d6faab1de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.914032231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=148f81b9-2b5d-433e-9627-44ee88cec8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.914087528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=148f81b9-2b5d-433e-9627-44ee88cec8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.914215866Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd19e9fe1a10eea5d5ea29d39031d9be75cf0a377a1e7e842c4a0c5b23b1e96b,PodSandboxId:ac65f9a74e147d164f96c56f16d2c4a705042fbe80a87788ca0fe62de3ee6a80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722624296365473162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2abf6bdd3c2c8d30857bac0b13b77b8,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb44c083f8c0612df3c3904ded108e44ea16fc5b8d60f762f61c3aa059b1fad,PodSandboxId:483685d3bcc7fa927fd094c4adbe3504697846f4a6e4b1ff877f53c637944d82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722624296343972601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec0498aa089fea2ed8f5db80a9f9fa3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ab73d8844696a62b526d22facac41958309304b33176d38d3c40d8ff90b066,PodSandboxId:b63a548d6d6734c00c851b78b4ecdfa6468a683d44bc5ebeb43635965bcc60d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722624296306319675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6b727ec1c56f0a19d050e3cbdc511f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7b0e2f4bb547884084daf5670089dfd86c9b3355ca20eb95fafd23c0d3b95e,PodSandboxId:9e6d01203f0c343ed5697e0ffac748e563bbcf33fab8dd63a945106fe56e6912,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722624296262922782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d09d6b5aba56f54181e828de5152b17,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=148f81b9-2b5d-433e-9627-44ee88cec8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.948181981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3119117b-4df8-4ed9-bde5-b8bee2f6b120 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.948271711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3119117b-4df8-4ed9-bde5-b8bee2f6b120 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.949407531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d88370d1-f358-4fa8-bd0b-a929ede599fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.949764835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722624303949741658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d88370d1-f358-4fa8-bd0b-a929ede599fb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.950427195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7667174e-719c-487d-89cd-a348d673ab68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.950481568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7667174e-719c-487d-89cd-a348d673ab68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:45:03 kubernetes-upgrade-132946 crio[1834]: time="2024-08-02 18:45:03.950591450Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd19e9fe1a10eea5d5ea29d39031d9be75cf0a377a1e7e842c4a0c5b23b1e96b,PodSandboxId:ac65f9a74e147d164f96c56f16d2c4a705042fbe80a87788ca0fe62de3ee6a80,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722624296365473162,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2abf6bdd3c2c8d30857bac0b13b77b8,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fb44c083f8c0612df3c3904ded108e44ea16fc5b8d60f762f61c3aa059b1fad,PodSandboxId:483685d3bcc7fa927fd094c4adbe3504697846f4a6e4b1ff877f53c637944d82,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722624296343972601,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fec0498aa089fea2ed8f5db80a9f9fa3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.contai
ner.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47ab73d8844696a62b526d22facac41958309304b33176d38d3c40d8ff90b066,PodSandboxId:b63a548d6d6734c00c851b78b4ecdfa6468a683d44bc5ebeb43635965bcc60d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722624296306319675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6b727ec1c56f0a19d050e3cbdc511f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7b0e2f4bb547884084daf5670089dfd86c9b3355ca20eb95fafd23c0d3b95e,PodSandboxId:9e6d01203f0c343ed5697e0ffac748e563bbcf33fab8dd63a945106fe56e6912,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722624296262922782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-132946,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d09d6b5aba56f54181e828de5152b17,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 1,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7667174e-719c-487d-89cd-a348d673ab68 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd19e9fe1a10e       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   7 seconds ago       Running             kube-scheduler            3                   ac65f9a74e147       kube-scheduler-kubernetes-upgrade-132946
	3fb44c083f8c0       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   7 seconds ago       Running             kube-controller-manager   1                   483685d3bcc7f       kube-controller-manager-kubernetes-upgrade-132946
	47ab73d884469       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      3                   b63a548d6d673       etcd-kubernetes-upgrade-132946
	8c7b0e2f4bb54       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   7 seconds ago       Running             kube-apiserver            1                   9e6d01203f0c3       kube-apiserver-kubernetes-upgrade-132946
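	The table above is what CRI-O returns for the ListContainers RPCs logged in the CRI-O section. A hedged Go sketch of the same call follows; it assumes direct access to the node's CRI socket (unix:///var/run/crio/crio.sock, per the node's kubeadm cri-socket annotation) and uses the published k8s.io/cri-api client rather than minikube's own tooling.

	// cri_list_containers.go - sketch of the /runtime.v1.RuntimeService/ListContainers call.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same request as in the log: no filter, so the full container list is returned.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncated ID, name, and state, roughly matching the "container status" table.
			fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}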
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-132946
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-132946
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=kubernetes-upgrade-132946
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_45_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:44:58 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-132946
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:45:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:45:01 +0000   Fri, 02 Aug 2024 18:44:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:45:01 +0000   Fri, 02 Aug 2024 18:44:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:45:01 +0000   Fri, 02 Aug 2024 18:44:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:45:01 +0000   Fri, 02 Aug 2024 18:44:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.113
	  Hostname:    kubernetes-upgrade-132946
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4257a09038c245978a43e750fa949059
	  System UUID:                4257a090-38c2-4597-8a43-e750fa949059
	  Boot ID:                    0529ec7b-5656-42a0-ae78-2c9535b5a743
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-132946                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-132946             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-132946    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-132946             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node kubernetes-upgrade-132946 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node kubernetes-upgrade-132946 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node kubernetes-upgrade-132946 status is now: NodeHasSufficientPID
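	The Conditions and Taints in this describe output can also be read programmatically. The sketch below is illustrative only: it reuses the same assumed kubeconfig path as the earlier pod-listing sketch and fetches the node kubernetes-upgrade-132946 with client-go, printing its conditions and the node.kubernetes.io/not-ready taint seen above.

	// node_conditions.go - sketch of reading node conditions and taints via client-go.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19355-5397/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "kubernetes-upgrade-132946", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			// e.g. "Ready True KubeletReady", matching the Conditions table above.
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
	}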
	
	
	==> dmesg <==
	[  +0.056239] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059651] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.160325] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.139250] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +6.715941] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.068293] kauditd_printk_skb: 102 callbacks suppressed
	[Aug 2 18:39] systemd-fstab-generator[737]: Ignoring "noauto" option for root device
	[  +0.065690] kauditd_printk_skb: 18 callbacks suppressed
	[  +2.069796] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +5.861242] kauditd_printk_skb: 97 callbacks suppressed
	[  +1.064026] systemd-fstab-generator[1243]: Ignoring "noauto" option for root device
	[  +3.682524] systemd-fstab-generator[1625]: Ignoring "noauto" option for root device
	[  +0.211538] systemd-fstab-generator[1693]: Ignoring "noauto" option for root device
	[  +0.089151] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.112755] systemd-fstab-generator[1730]: Ignoring "noauto" option for root device
	[  +0.171954] systemd-fstab-generator[1742]: Ignoring "noauto" option for root device
	[  +0.615397] systemd-fstab-generator[1770]: Ignoring "noauto" option for root device
	[Aug 2 18:40] kauditd_printk_skb: 78 callbacks suppressed
	[  +0.698449] systemd-fstab-generator[1978]: Ignoring "noauto" option for root device
	[  +2.181131] systemd-fstab-generator[2098]: Ignoring "noauto" option for root device
	[Aug 2 18:44] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.062483] systemd-fstab-generator[7684]: Ignoring "noauto" option for root device
	[Aug 2 18:45] systemd-fstab-generator[8006]: Ignoring "noauto" option for root device
	[  +0.072299] kauditd_printk_skb: 64 callbacks suppressed
	[  +1.137786] systemd-fstab-generator[8085]: Ignoring "noauto" option for root device
	
	
	==> etcd [47ab73d8844696a62b526d22facac41958309304b33176d38d3c40d8ff90b066] <==
	{"level":"info","ts":"2024-08-02T18:44:56.631834Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-02T18:44:56.632084Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6bf3317fd0e8dc60","initial-advertise-peer-urls":["https://192.168.72.113:2380"],"listen-peer-urls":["https://192.168.72.113:2380"],"advertise-client-urls":["https://192.168.72.113:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.113:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-02T18:44:56.632175Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:44:56.632268Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.113:2380"}
	{"level":"info","ts":"2024-08-02T18:44:56.632317Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.113:2380"}
	{"level":"info","ts":"2024-08-02T18:44:57.374000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-02T18:44:57.374189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-02T18:44:57.374265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 received MsgPreVoteResp from 6bf3317fd0e8dc60 at term 1"}
	{"level":"info","ts":"2024-08-02T18:44:57.374312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:44:57.374342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 received MsgVoteResp from 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-08-02T18:44:57.374417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6bf3317fd0e8dc60 became leader at term 2"}
	{"level":"info","ts":"2024-08-02T18:44:57.374443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6bf3317fd0e8dc60 elected leader 6bf3317fd0e8dc60 at term 2"}
	{"level":"info","ts":"2024-08-02T18:44:57.379148Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6bf3317fd0e8dc60","local-member-attributes":"{Name:kubernetes-upgrade-132946 ClientURLs:[https://192.168.72.113:2379]}","request-path":"/0/members/6bf3317fd0e8dc60/attributes","cluster-id":"19cf5c6a1483664a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:44:57.379405Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:44:57.379574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:44:57.380378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:44:57.381967Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:44:57.389198Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:44:57.382542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"19cf5c6a1483664a","local-member-id":"6bf3317fd0e8dc60","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:44:57.389413Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:44:57.389478Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-02T18:44:57.383242Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-02T18:44:57.390320Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:44:57.401775Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-02T18:44:57.404737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.113:2379"}
	
	
	==> kernel <==
	 18:45:04 up 6 min,  0 users,  load average: 0.35, 0.17, 0.08
	Linux kubernetes-upgrade-132946 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8c7b0e2f4bb547884084daf5670089dfd86c9b3355ca20eb95fafd23c0d3b95e] <==
	I0802 18:44:58.763777       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 18:44:58.763869       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0802 18:44:58.763875       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0802 18:44:58.764177       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0802 18:44:58.764220       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:44:58.764475       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0802 18:44:58.796100       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0802 18:44:58.796181       1 aggregator.go:171] initial CRD sync complete...
	I0802 18:44:58.796196       1 autoregister_controller.go:144] Starting autoregister controller
	I0802 18:44:58.796202       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0802 18:44:58.796207       1 cache.go:39] Caches are synced for autoregister controller
	I0802 18:44:58.815578       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:44:59.672832       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0802 18:44:59.679848       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0802 18:44:59.679984       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:45:00.355123       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:45:00.395592       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:45:00.474513       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0802 18:45:00.482523       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.113]
	I0802 18:45:00.484184       1 controller.go:615] quota admission added evaluator for: endpoints
	I0802 18:45:00.488633       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:45:00.727804       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 18:45:01.635819       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 18:45:01.652550       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0802 18:45:01.664922       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [3fb44c083f8c0612df3c3904ded108e44ea16fc5b8d60f762f61c3aa059b1fad] <==
	I0802 18:45:03.177866       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0802 18:45:03.177896       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0802 18:45:03.325583       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I0802 18:45:03.325663       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0802 18:45:03.325672       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0802 18:45:03.373717       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0802 18:45:03.373797       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0802 18:45:03.374440       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0802 18:45:03.374507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0802 18:45:03.673817       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0802 18:45:03.673902       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0802 18:45:03.673911       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0802 18:45:03.826763       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I0802 18:45:03.826897       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0802 18:45:03.826911       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0802 18:45:03.975727       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I0802 18:45:03.975801       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0802 18:45:03.975810       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0802 18:45:04.128715       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I0802 18:45:04.128872       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I0802 18:45:04.128895       1 shared_informer.go:313] Waiting for caches to sync for job
	I0802 18:45:04.275344       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I0802 18:45:04.275426       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0802 18:45:04.275436       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0802 18:45:04.275444       1 shared_informer.go:320] Caches are synced for token_cleaner
	
	
	==> kube-scheduler [cd19e9fe1a10eea5d5ea29d39031d9be75cf0a377a1e7e842c4a0c5b23b1e96b] <==
	W0802 18:44:59.819733       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 18:44:59.819874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0802 18:44:59.856091       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 18:44:59.856180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0802 18:44:59.965350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 18:44:59.965396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0802 18:44:59.970506       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0802 18:44:59.970553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0802 18:44:59.970651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 18:44:59.970676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0802 18:44:59.973049       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 18:44:59.973102       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0802 18:45:00.026137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 18:45:00.026181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0802 18:45:00.047849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 18:45:00.048022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0802 18:45:00.056809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0802 18:45:00.056922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0802 18:45:00.068239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 18:45:00.068520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0802 18:45:00.103626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 18:45:00.103849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0802 18:45:00.111922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 18:45:00.112035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0802 18:45:02.438164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: E0802 18:45:01.575086    8012 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.659327    8012 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.692286    8012 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.692359    8012 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.808789    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/5e6b727ec1c56f0a19d050e3cbdc511f-etcd-data\") pod \"etcd-kubernetes-upgrade-132946\" (UID: \"5e6b727ec1c56f0a19d050e3cbdc511f\") " pod="kube-system/etcd-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.808880    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d09d6b5aba56f54181e828de5152b17-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-132946\" (UID: \"6d09d6b5aba56f54181e828de5152b17\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.808909    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d09d6b5aba56f54181e828de5152b17-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-132946\" (UID: \"6d09d6b5aba56f54181e828de5152b17\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809040    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d09d6b5aba56f54181e828de5152b17-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-132946\" (UID: \"6d09d6b5aba56f54181e828de5152b17\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809076    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/5e6b727ec1c56f0a19d050e3cbdc511f-etcd-certs\") pod \"etcd-kubernetes-upgrade-132946\" (UID: \"5e6b727ec1c56f0a19d050e3cbdc511f\") " pod="kube-system/etcd-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809142    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec0498aa089fea2ed8f5db80a9f9fa3-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-132946\" (UID: \"fec0498aa089fea2ed8f5db80a9f9fa3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809218    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2abf6bdd3c2c8d30857bac0b13b77b8-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-132946\" (UID: \"e2abf6bdd3c2c8d30857bac0b13b77b8\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809317    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec0498aa089fea2ed8f5db80a9f9fa3-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-132946\" (UID: \"fec0498aa089fea2ed8f5db80a9f9fa3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809356    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec0498aa089fea2ed8f5db80a9f9fa3-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-132946\" (UID: \"fec0498aa089fea2ed8f5db80a9f9fa3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809429    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec0498aa089fea2ed8f5db80a9f9fa3-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-132946\" (UID: \"fec0498aa089fea2ed8f5db80a9f9fa3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946"
	Aug 02 18:45:01 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:01.809506    8012 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec0498aa089fea2ed8f5db80a9f9fa3-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-132946\" (UID: \"fec0498aa089fea2ed8f5db80a9f9fa3\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.488085    8012 apiserver.go:52] "Watching apiserver"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.508240    8012 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.638427    8012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-132946" podStartSLOduration=1.6383929259999999 podStartE2EDuration="1.638392926s" podCreationTimestamp="2024-08-02 18:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 18:45:02.619503574 +0000 UTC m=+1.215674637" watchObservedRunningTime="2024-08-02 18:45:02.638392926 +0000 UTC m=+1.234564005"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.650799    8012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-132946" podStartSLOduration=1.650763756 podStartE2EDuration="1.650763756s" podCreationTimestamp="2024-08-02 18:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 18:45:02.638607282 +0000 UTC m=+1.234778355" watchObservedRunningTime="2024-08-02 18:45:02.650763756 +0000 UTC m=+1.246934829"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.670265    8012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-132946" podStartSLOduration=1.670245158 podStartE2EDuration="1.670245158s" podCreationTimestamp="2024-08-02 18:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 18:45:02.652038905 +0000 UTC m=+1.248209984" watchObservedRunningTime="2024-08-02 18:45:02.670245158 +0000 UTC m=+1.266416239"
	Aug 02 18:45:02 kubernetes-upgrade-132946 kubelet[8012]: I0802 18:45:02.670365    8012 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-132946" podStartSLOduration=1.670359429 podStartE2EDuration="1.670359429s" podCreationTimestamp="2024-08-02 18:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-02 18:45:02.670210197 +0000 UTC m=+1.266381275" watchObservedRunningTime="2024-08-02 18:45:02.670359429 +0000 UTC m=+1.266530510"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-132946 -n kubernetes-upgrade-132946
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-132946 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-132946 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-132946 describe pod storage-provisioner: exit status 1 (58.323991ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-132946 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-132946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-132946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-132946: (1.108023463s)
--- FAIL: TestKubernetesUpgrade (729.32s)

x
+
TestPause/serial/SecondStartNoReconfiguration (77.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-455569 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-455569 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.680897581s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-455569] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-455569" primary control-plane node in "pause-455569" cluster
	* Updating the running kvm2 "pause-455569" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-455569" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0802 18:36:16.469721   51349 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:36:16.469991   51349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:16.470001   51349 out.go:304] Setting ErrFile to fd 2...
	I0802 18:36:16.470005   51349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:16.470189   51349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:36:16.470698   51349 out.go:298] Setting JSON to false
	I0802 18:36:16.471621   51349 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4720,"bootTime":1722619056,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:36:16.471679   51349 start.go:139] virtualization: kvm guest
	I0802 18:36:16.473886   51349 out.go:177] * [pause-455569] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:36:16.475225   51349 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:36:16.475248   51349 notify.go:220] Checking for updates...
	I0802 18:36:16.477753   51349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:36:16.478910   51349 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:36:16.480149   51349 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:16.481377   51349 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:36:16.482581   51349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:36:16.484416   51349 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:16.485055   51349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:36:16.485132   51349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:36:16.499885   51349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0802 18:36:16.500270   51349 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:36:16.500820   51349 main.go:141] libmachine: Using API Version  1
	I0802 18:36:16.500845   51349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:36:16.501179   51349 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:36:16.501363   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:16.501638   51349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:36:16.502064   51349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:36:16.502108   51349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:36:16.519125   51349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44779
	I0802 18:36:16.519684   51349 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:36:16.520259   51349 main.go:141] libmachine: Using API Version  1
	I0802 18:36:16.520284   51349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:36:16.520645   51349 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:36:16.520882   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:16.554826   51349 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:36:16.556153   51349 start.go:297] selected driver: kvm2
	I0802 18:36:16.556171   51349 start.go:901] validating driver "kvm2" against &{Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:16.556356   51349 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:36:16.556783   51349 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:16.556864   51349 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:36:16.571949   51349 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:36:16.572600   51349 cni.go:84] Creating CNI manager for ""
	I0802 18:36:16.572613   51349 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:36:16.572671   51349 start.go:340] cluster config:
	{Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:16.572792   51349 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:16.574812   51349 out.go:177] * Starting "pause-455569" primary control-plane node in "pause-455569" cluster
	I0802 18:36:16.576103   51349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:36:16.576137   51349 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:36:16.576145   51349 cache.go:56] Caching tarball of preloaded images
	I0802 18:36:16.576233   51349 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:36:16.576247   51349 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:36:16.576382   51349 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/config.json ...
	I0802 18:36:16.576576   51349 start.go:360] acquireMachinesLock for pause-455569: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:36:43.791860   51349 start.go:364] duration metric: took 27.21524605s to acquireMachinesLock for "pause-455569"
	I0802 18:36:43.791908   51349 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:36:43.791917   51349 fix.go:54] fixHost starting: 
	I0802 18:36:43.792307   51349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:36:43.792477   51349 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:36:43.810863   51349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38857
	I0802 18:36:43.811281   51349 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:36:43.811787   51349 main.go:141] libmachine: Using API Version  1
	I0802 18:36:43.811810   51349 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:36:43.812161   51349 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:36:43.812369   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:43.812528   51349 main.go:141] libmachine: (pause-455569) Calling .GetState
	I0802 18:36:43.814280   51349 fix.go:112] recreateIfNeeded on pause-455569: state=Running err=<nil>
	W0802 18:36:43.814297   51349 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:36:43.816079   51349 out.go:177] * Updating the running kvm2 "pause-455569" VM ...
	I0802 18:36:43.817189   51349 machine.go:94] provisionDockerMachine start ...
	I0802 18:36:43.817207   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:43.817390   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.819966   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820376   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.820416   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820548   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.820711   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820843   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820985   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.821138   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.821320   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.821339   51349 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:36:43.939817   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:43.939849   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940132   51349 buildroot.go:166] provisioning hostname "pause-455569"
	I0802 18:36:43.940158   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940385   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.943989   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944498   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.944536   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944705   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.944923   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945109   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945269   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.945424   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.945757   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.945790   51349 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-455569 && echo "pause-455569" | sudo tee /etc/hostname
	I0802 18:36:44.073861   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:44.073897   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.535132   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535596   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.535637   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535814   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:44.536076   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536283   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536432   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:44.536674   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:44.536910   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:44.536936   51349 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-455569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-455569/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-455569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:36:44.656659   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:44.656695   51349 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:36:44.656719   51349 buildroot.go:174] setting up certificates
	I0802 18:36:44.656735   51349 provision.go:84] configureAuth start
	I0802 18:36:44.656748   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:44.657033   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:44.660513   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.660915   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.660942   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.661141   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.663667   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664015   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.664039   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664219   51349 provision.go:143] copyHostCerts
	I0802 18:36:44.664288   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:36:44.664299   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:36:44.664354   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:36:44.664475   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:36:44.664488   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:36:44.664525   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:36:44.664610   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:36:44.664621   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:36:44.664685   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:36:44.664757   51349 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.pause-455569 san=[127.0.0.1 192.168.39.26 localhost minikube pause-455569]
	I0802 18:36:45.112605   51349 provision.go:177] copyRemoteCerts
	I0802 18:36:45.112666   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:36:45.112688   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.115426   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.115785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115899   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.116100   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.116263   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.116420   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:45.209422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:36:45.234782   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0802 18:36:45.258977   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:36:45.288678   51349 provision.go:87] duration metric: took 631.931145ms to configureAuth
	I0802 18:36:45.288704   51349 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:36:45.288886   51349 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:45.288961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.291523   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291819   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.291854   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291998   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.292208   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292366   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292492   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.292625   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:45.292804   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:45.292820   51349 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:36:53.116134   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:36:53.116161   51349 machine.go:97] duration metric: took 9.298960301s to provisionDockerMachine
	I0802 18:36:53.116175   51349 start.go:293] postStartSetup for "pause-455569" (driver="kvm2")
	I0802 18:36:53.116189   51349 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:36:53.116209   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.116697   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:36:53.116735   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.120256   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.120785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120988   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.121169   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.121333   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.121531   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.213159   51349 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:36:53.217372   51349 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:36:53.217398   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:36:53.217466   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:36:53.217586   51349 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:36:53.217733   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:36:53.226713   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:53.253754   51349 start.go:296] duration metric: took 137.564126ms for postStartSetup
	I0802 18:36:53.253799   51349 fix.go:56] duration metric: took 9.461883705s for fixHost
	I0802 18:36:53.253823   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.256858   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257245   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.257275   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257499   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.257745   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.257961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.258127   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.258342   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:53.258577   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:53.258593   51349 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:36:53.367640   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623813.363097160
	
	I0802 18:36:53.367663   51349 fix.go:216] guest clock: 1722623813.363097160
	I0802 18:36:53.367670   51349 fix.go:229] Guest: 2024-08-02 18:36:53.36309716 +0000 UTC Remote: 2024-08-02 18:36:53.253804237 +0000 UTC m=+36.822748293 (delta=109.292923ms)
	I0802 18:36:53.367690   51349 fix.go:200] guest clock delta is within tolerance: 109.292923ms
	I0802 18:36:53.367695   51349 start.go:83] releasing machines lock for "pause-455569", held for 9.575807071s
	I0802 18:36:53.367715   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.367973   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:53.371290   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371672   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.371701   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371823   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372414   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372642   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372726   51349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:36:53.372772   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.372847   51349 ssh_runner.go:195] Run: cat /version.json
	I0802 18:36:53.372869   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.375636   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.375853   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376027   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376055   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376189   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376279   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376308   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376345   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376486   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376531   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376619   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376721   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.376782   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376901   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
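	(Each `sshutil.go:53] new ssh client` line corresponds to opening a key-authenticated SSH session to the guest VM. This is a rough sketch of that handshake using golang.org/x/crypto/ssh, not minikube's sshutil implementation; the IP, key path, and user are the ones shown in the log, and the relaxed host-key callback is for throwaway test VMs only.)

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // dial opens an SSH connection to ip:22 authenticating with the private key at keyPath.
    func dial(ip, keyPath, user string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for disposable test VMs
        }
        return ssh.Dial("tcp", ip+":22", cfg)
    }

    func main() {
        client, err := dial("192.168.39.26", os.ExpandEnv("$HOME/.minikube/machines/pause-455569/id_rsa"), "docker")
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }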
	I0802 18:36:53.468342   51349 ssh_runner.go:195] Run: systemctl --version
	I0802 18:36:53.496584   51349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:36:53.679352   51349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:36:53.689060   51349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:36:53.689143   51349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:36:53.711143   51349 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
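	(The `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step renames any bridge/podman CNI configs so they cannot conflict with the CNI minikube installs. A file-rename sketch of the same idea in Go, intended to run on the guest; the directory and the `.mk_disabled` suffix come from the log above.)

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs in dir by appending ".mk_disabled".
    func disableBridgeCNI(dir string) error {
        entries, err := filepath.Glob(filepath.Join(dir, "*"))
        if err != nil {
            return err
        }
        for _, path := range entries {
            base := filepath.Base(path)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                if err := os.Rename(path, path+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", path)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
            fmt.Println(err)
        }
    }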
	I0802 18:36:53.711175   51349 start.go:495] detecting cgroup driver to use...
	I0802 18:36:53.711255   51349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:36:53.744892   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:36:53.762781   51349 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:36:53.762845   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:36:53.789046   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:36:53.916327   51349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:36:54.170102   51349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:36:54.365200   51349 docker.go:233] disabling docker service ...
	I0802 18:36:54.365285   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:36:54.422189   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:36:54.457595   51349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:36:54.713424   51349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:36:55.135741   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:36:55.167916   51349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:36:55.233366   51349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:36:55.233436   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.260193   51349 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:36:55.260274   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.275616   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.298506   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.314564   51349 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:36:55.335754   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.358202   51349 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.375757   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
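	(The sed invocations above rewrite `pause_image` and `cgroup_manager` in the CRI-O drop-in before restarting the service. A Go equivalent of those in-place edits, as a sketch only; the file path and key/value pairs are the ones from the log, and error handling is reduced to printing.)

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setOption rewrites a `key = value` line in a crio drop-in, mirroring the sed edits above.
    func setOption(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        if !re.Match(data) {
            return fmt.Errorf("%s: no %s line to rewrite", path, key)
        }
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        for k, v := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.9",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setOption(conf, k, v); err != nil {
                fmt.Println(err)
            }
        }
    }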
	I0802 18:36:55.391753   51349 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:36:55.404660   51349 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:36:55.416981   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:55.675813   51349 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:37:05.974875   51349 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.29902423s)
	I0802 18:37:05.974915   51349 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:37:05.974973   51349 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:37:05.979907   51349 start.go:563] Will wait 60s for crictl version
	I0802 18:37:05.979952   51349 ssh_runner.go:195] Run: which crictl
	I0802 18:37:05.983635   51349 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:37:06.018370   51349 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:37:06.018446   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.045659   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.076497   51349 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:37:06.077558   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:37:06.080529   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.080886   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:37:06.080908   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.081163   51349 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:37:06.085397   51349 kubeadm.go:883] updating cluster {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:37:06.085545   51349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:37:06.085616   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.126311   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.126333   51349 crio.go:433] Images already preloaded, skipping extraction
	I0802 18:37:06.126380   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.163548   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.163583   51349 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:37:06.163593   51349 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.30.3 crio true true} ...
	I0802 18:37:06.163744   51349 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:37:06.163835   51349 ssh_runner.go:195] Run: crio config
	I0802 18:37:06.216364   51349 cni.go:84] Creating CNI manager for ""
	I0802 18:37:06.216384   51349 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:06.216394   51349 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:37:06.216413   51349 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455569 NodeName:pause-455569 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:37:06.216531   51349 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455569"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
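	(The YAML block above is the kubeadm/kubelet configuration minikube renders and then copies to /var/tmp/minikube/kubeadm.yaml.new. The following is a small sketch of templating just the KubeletConfiguration piece with Go's text/template; the template string and the kubeletOpts struct are illustrative inventions, not minikube's actual template, though the field values are taken from the log.)

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    hairpinMode: hairpin-veth
    runtimeRequestTimeout: 15m
    clusterDomain: "{{.DNSDomain}}"
    staticPodPath: /etc/kubernetes/manifests
    `

    // kubeletOpts carries the handful of values substituted into the template.
    type kubeletOpts struct {
        CgroupDriver string
        CRISocket    string
        DNSDomain    string
    }

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
        _ = t.Execute(os.Stdout, kubeletOpts{
            CgroupDriver: "cgroupfs",
            CRISocket:    "unix:///var/run/crio/crio.sock",
            DNSDomain:    "cluster.local",
        })
    }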
	I0802 18:37:06.216591   51349 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:37:06.226521   51349 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:37:06.226590   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:37:06.235989   51349 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0802 18:37:06.252699   51349 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:37:06.268779   51349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 18:37:06.285022   51349 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0802 18:37:06.288803   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:37:06.424154   51349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:37:06.439377   51349 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569 for IP: 192.168.39.26
	I0802 18:37:06.439402   51349 certs.go:194] generating shared ca certs ...
	I0802 18:37:06.439421   51349 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:06.439597   51349 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:37:06.439652   51349 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:37:06.439661   51349 certs.go:256] generating profile certs ...
	I0802 18:37:06.439745   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/client.key
	I0802 18:37:06.439838   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key.baed76b2
	I0802 18:37:06.439873   51349 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key
	I0802 18:37:06.440019   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:37:06.440054   51349 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:37:06.440064   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:37:06.440087   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:37:06.440113   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:37:06.440130   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:37:06.440164   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:37:06.440694   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:37:06.465958   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:37:06.490272   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:37:06.512930   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:37:06.534670   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 18:37:06.557698   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 18:37:06.579821   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:37:06.601422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:37:06.624191   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:37:06.646603   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:37:06.672574   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:37:06.695750   51349 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:37:06.711231   51349 ssh_runner.go:195] Run: openssl version
	I0802 18:37:06.716980   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:37:06.727800   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732170   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732226   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.738156   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:37:06.747617   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:37:06.757803   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761886   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761937   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.767668   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:37:06.776348   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:37:06.786787   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790824   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790873   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.796473   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:37:06.806038   51349 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:37:06.810368   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:37:06.815765   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:37:06.821175   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:37:06.826527   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:37:06.831536   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:37:06.836568   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
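	(The `openssl x509 -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed in Go with crypto/x509; this is a sketch with the certificate path taken from the log and errors simply printed.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside d,
    // i.e. the question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }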
	I0802 18:37:06.841945   51349 kubeadm.go:392] StartCluster: {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:37:06.842088   51349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:37:06.842131   51349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:37:06.876875   51349 cri.go:89] found id: "bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d"
	I0802 18:37:06.876910   51349 cri.go:89] found id: "e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544"
	I0802 18:37:06.876918   51349 cri.go:89] found id: "3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8"
	I0802 18:37:06.876924   51349 cri.go:89] found id: "d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe"
	I0802 18:37:06.876929   51349 cri.go:89] found id: "c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7"
	I0802 18:37:06.876935   51349 cri.go:89] found id: "64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6"
	I0802 18:37:06.876939   51349 cri.go:89] found id: "cd4c6565542c91adb90cecb787b79f87939fdb0e03a0aa9dad1a1f778becdbc4"
	I0802 18:37:06.876944   51349 cri.go:89] found id: "51defafa540f57928366e7d3101908daa839051eb51c6250f5aefe9a4af1e3ee"
	I0802 18:37:06.876949   51349 cri.go:89] found id: "1457c2f2941eafeeaa86f8cf787a8da01a73f949da71a1a6ef8af37ac63ffd85"
	I0802 18:37:06.876958   51349 cri.go:89] found id: "b83d690b8c4f1408d97e336b93e91b91bf371aefc601b1793a7485e785665d18"
	I0802 18:37:06.876963   51349 cri.go:89] found id: "e5647b8714ff3460a485e6cdd00b03f7d8ff47b859819cb0aa43fca94682d24e"
	I0802 18:37:06.876967   51349 cri.go:89] found id: "56f59a67c271d9a0dc015537492509698838cb31b03a4e2b6de0c56b92bab8b2"
	I0802 18:37:06.876972   51349 cri.go:89] found id: ""
	I0802 18:37:06.877032   51349 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
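	(The tail of the stderr trace above shows minikube enumerating kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` before falling back to `runc list`. A small sketch of invoking that listing and splitting the returned IDs, assuming crictl is installed on the node and the caller may run it via sudo.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeSystemContainerIDs returns the IDs crictl reports for kube-system containers.
    func kubeSystemContainerIDs() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := kubeSystemContainerIDs()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("found", len(ids), "containers")
    }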
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-455569 -n pause-455569
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-455569 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-455569 logs -n 25: (1.338848028s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-872961         | offline-crio-872961       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:33 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-079131      | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:34 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-919916    | force-systemd-env-919916  | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:32 UTC |
	| start   | -p kubernetes-upgrade-132946   | kubernetes-upgrade-132946 | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:33 UTC | 02 Aug 24 18:34 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-872961         | offline-crio-872961       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:34 UTC |
	| start   | -p running-upgrade-079131      | running-upgrade-079131    | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-837935      | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:34 UTC |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-891799 sudo    | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-079131      | running-upgrade-079131    | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p pause-455569 --memory=2048  | pause-455569              | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-837935 stop    | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p stopped-upgrade-837935      | stopped-upgrade-837935    | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-891799 sudo    | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:36 UTC |
	| start   | -p cert-expiration-139745      | cert-expiration-139745    | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:37 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-455569                | pause-455569              | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:37 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-837935      | stopped-upgrade-837935    | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:36 UTC |
	| start   | -p force-systemd-flag-234725   | force-systemd-flag-234725 | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:36:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:36:44.828869   51814 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:36:44.829155   51814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:44.829167   51814 out.go:304] Setting ErrFile to fd 2...
	I0802 18:36:44.829173   51814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:44.829376   51814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:36:44.829977   51814 out.go:298] Setting JSON to false
	I0802 18:36:44.830962   51814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4749,"bootTime":1722619056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:36:44.831025   51814 start.go:139] virtualization: kvm guest
	I0802 18:36:44.835178   51814 out.go:177] * [force-systemd-flag-234725] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:36:44.839135   51814 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:36:44.839192   51814 notify.go:220] Checking for updates...
	I0802 18:36:44.841777   51814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:36:44.843051   51814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:36:44.844422   51814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:44.845716   51814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:36:44.847223   51814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:36:44.848935   51814 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:44.849036   51814 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:36:44.849156   51814 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:44.849237   51814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:36:44.889361   51814 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:36:44.890612   51814 start.go:297] selected driver: kvm2
	I0802 18:36:44.890625   51814 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:36:44.890639   51814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:36:44.891657   51814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:44.891738   51814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:36:44.907998   51814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:36:44.908058   51814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 18:36:44.908266   51814 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 18:36:44.908287   51814 cni.go:84] Creating CNI manager for ""
	I0802 18:36:44.908295   51814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:36:44.908302   51814 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 18:36:44.908363   51814 start.go:340] cluster config:
	{Name:force-systemd-flag-234725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-234725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:44.908466   51814 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:44.910792   51814 out.go:177] * Starting "force-systemd-flag-234725" primary control-plane node in "force-systemd-flag-234725" cluster
	I0802 18:36:42.216656   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.217131   51259 main.go:141] libmachine: (cert-expiration-139745) Found IP for machine: 192.168.61.201
	I0802 18:36:42.217156   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has current primary IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.217165   51259 main.go:141] libmachine: (cert-expiration-139745) Reserving static IP address...
	I0802 18:36:42.217599   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | unable to find host DHCP lease matching {name: "cert-expiration-139745", mac: "52:54:00:ee:71:54", ip: "192.168.61.201"} in network mk-cert-expiration-139745
	I0802 18:36:42.292607   51259 main.go:141] libmachine: (cert-expiration-139745) Reserved static IP address: 192.168.61.201
	I0802 18:36:42.292626   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Getting to WaitForSSH function...
	I0802 18:36:42.292634   51259 main.go:141] libmachine: (cert-expiration-139745) Waiting for SSH to be available...
	I0802 18:36:42.295684   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.296125   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.296155   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.296296   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using SSH client type: external
	I0802 18:36:42.296318   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa (-rw-------)
	I0802 18:36:42.296365   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:36:42.296373   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | About to run SSH command:
	I0802 18:36:42.296385   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | exit 0
	I0802 18:36:42.431559   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | SSH cmd err, output: <nil>: 
	I0802 18:36:42.431820   51259 main.go:141] libmachine: (cert-expiration-139745) KVM machine creation complete!
	I0802 18:36:42.432246   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetConfigRaw
	I0802 18:36:42.432807   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:42.433018   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:42.433212   51259 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 18:36:42.433223   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:36:42.434691   51259 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 18:36:42.434699   51259 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 18:36:42.434704   51259 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 18:36:42.434709   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.437316   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.437717   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.437739   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.437916   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.438087   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.438222   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.438319   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.438450   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.438651   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.438657   51259 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 18:36:42.546991   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:42.547009   51259 main.go:141] libmachine: Detecting the provisioner...
	I0802 18:36:42.547018   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.550037   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.550440   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.550464   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.550620   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.550788   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.550998   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.551120   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.551292   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.551459   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.551464   51259 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 18:36:42.667784   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 18:36:42.667857   51259 main.go:141] libmachine: found compatible host: buildroot
	I0802 18:36:42.667863   51259 main.go:141] libmachine: Provisioning with buildroot...
	I0802 18:36:42.667869   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.668134   51259 buildroot.go:166] provisioning hostname "cert-expiration-139745"
	I0802 18:36:42.668170   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.668411   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.671425   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.671931   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.671962   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.672062   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.672251   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.672440   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.672618   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.672816   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.673013   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.673024   51259 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-139745 && echo "cert-expiration-139745" | sudo tee /etc/hostname
	I0802 18:36:42.801879   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-139745
	
	I0802 18:36:42.801901   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.805018   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.805386   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.805405   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.805644   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.805850   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.806046   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.806181   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.806348   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.806516   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.806527   51259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-139745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-139745/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-139745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:36:42.930482   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
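	(Illustrative sketch, not part of the captured log: the hostname step above can be re-checked by hand over the same SSH connection; the address, user and key path are the ones reported in the surrounding lines.)
	    # confirm the hostname and the 127.0.1.1 mapping written by the command above
	    ssh -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa \
	        docker@192.168.61.201 'hostname; grep 127.0.1.1 /etc/hosts || true'
	    # expect the hostname cert-expiration-139745 and, if it was appended, the 127.0.1.1 mapping line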
	I0802 18:36:42.930496   51259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:36:42.930543   51259 buildroot.go:174] setting up certificates
	I0802 18:36:42.930553   51259 provision.go:84] configureAuth start
	I0802 18:36:42.930562   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.930848   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:42.933608   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.934022   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.934050   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.934201   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.936523   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.936837   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.936852   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.936968   51259 provision.go:143] copyHostCerts
	I0802 18:36:42.937017   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:36:42.937023   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:36:42.937084   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:36:42.937180   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:36:42.937184   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:36:42.937204   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:36:42.937250   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:36:42.937253   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:36:42.937269   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:36:42.937309   51259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-139745 san=[127.0.0.1 192.168.61.201 cert-expiration-139745 localhost minikube]
	I0802 18:36:43.082698   51259 provision.go:177] copyRemoteCerts
	I0802 18:36:43.082746   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:36:43.082768   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.085750   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.086185   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.086207   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.086440   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.086624   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.086773   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.086902   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.176478   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 18:36:43.201724   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:36:43.226106   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:36:43.250222   51259 provision.go:87] duration metric: took 319.658347ms to configureAuth
	I0802 18:36:43.250238   51259 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:36:43.250491   51259 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:43.250570   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.253147   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.253424   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.253447   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.253614   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.253803   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.253967   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.254085   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.254259   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.254468   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:43.254478   51259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:36:43.530960   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:36:43.530977   51259 main.go:141] libmachine: Checking connection to Docker...
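	(Illustrative only, not from the log: what the crio.minikube step above leaves on the guest, and a quick way to confirm CRI-O came back after the restart.)
	    cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '   <- matches the SSH output above
	    sudo systemctl is-active crio    # expect: active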
	I0802 18:36:43.530988   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetURL
	I0802 18:36:43.532582   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using libvirt version 6000000
	I0802 18:36:43.535301   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.535678   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.535701   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.535906   51259 main.go:141] libmachine: Docker is up and running!
	I0802 18:36:43.535913   51259 main.go:141] libmachine: Reticulating splines...
	I0802 18:36:43.535917   51259 client.go:171] duration metric: took 23.811578526s to LocalClient.Create
	I0802 18:36:43.535938   51259 start.go:167] duration metric: took 23.811625469s to libmachine.API.Create "cert-expiration-139745"
	I0802 18:36:43.535946   51259 start.go:293] postStartSetup for "cert-expiration-139745" (driver="kvm2")
	I0802 18:36:43.535957   51259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:36:43.535984   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.536272   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:36:43.536293   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.538918   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.539361   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.539382   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.539556   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.539776   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.539965   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.540109   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.626284   51259 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:36:43.630319   51259 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:36:43.630333   51259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:36:43.630394   51259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:36:43.630487   51259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:36:43.630589   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:36:43.640203   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:43.666008   51259 start.go:296] duration metric: took 130.051153ms for postStartSetup
	I0802 18:36:43.666041   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetConfigRaw
	I0802 18:36:43.666703   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:43.669620   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.670038   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.670061   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.670282   51259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/config.json ...
	I0802 18:36:43.670455   51259 start.go:128] duration metric: took 23.966825616s to createHost
	I0802 18:36:43.670473   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.672773   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.673112   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.673131   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.673290   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.673469   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.673648   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.673796   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.674008   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.674211   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:43.674223   51259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:36:43.791731   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623803.767450132
	
	I0802 18:36:43.791742   51259 fix.go:216] guest clock: 1722623803.767450132
	I0802 18:36:43.791762   51259 fix.go:229] Guest: 2024-08-02 18:36:43.767450132 +0000 UTC Remote: 2024-08-02 18:36:43.670461271 +0000 UTC m=+37.912934760 (delta=96.988861ms)
	I0802 18:36:43.791784   51259 fix.go:200] guest clock delta is within tolerance: 96.988861ms
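	(Note, not part of the log: the "date +%!s(MISSING).%!N(MISSING)" text above is a log-formatter artifact; the command actually sent is presumably the usual seconds.nanoseconds query, date +%s.%N, which matches the numeric output. The reported delta is simply guest minus host at the moment of the check:)
	    # guest: 1722623803.767450132  (2024-08-02 18:36:43.767450132 UTC, from the SSH output above)
	    # host:  2024-08-02 18:36:43.670461271 UTC
	    # delta: 0.767450132 s - 0.670461271 s = 0.096988861 s ≈ 96.988861 ms  -> within tolerance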
	I0802 18:36:43.791789   51259 start.go:83] releasing machines lock for "cert-expiration-139745", held for 24.088280864s
	I0802 18:36:43.791813   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.792044   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:43.795278   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.795685   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.795703   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.795859   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796445   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796678   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796783   51259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:36:43.796815   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.797159   51259 ssh_runner.go:195] Run: cat /version.json
	I0802 18:36:43.797176   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.800208   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.800827   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.800854   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.800890   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.801109   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.801255   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.801272   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.801313   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.801459   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.801544   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.801612   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.801686   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.801807   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.801925   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.914339   51259 ssh_runner.go:195] Run: systemctl --version
	I0802 18:36:43.920258   51259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:36:44.078958   51259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:36:44.084947   51259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:36:44.085004   51259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:36:44.101214   51259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:36:44.101230   51259 start.go:495] detecting cgroup driver to use...
	I0802 18:36:44.101315   51259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:36:44.117987   51259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:36:44.133020   51259 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:36:44.133055   51259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:36:44.146710   51259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:36:44.160424   51259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:36:44.283866   51259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:36:44.455896   51259 docker.go:233] disabling docker service ...
	I0802 18:36:44.455959   51259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:36:44.469269   51259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:36:44.481848   51259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:36:44.613758   51259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:36:44.732610   51259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:36:44.747693   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:36:44.770032   51259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:36:44.770077   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.780949   51259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:36:44.781021   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.794202   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.805412   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.816285   51259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:36:44.827637   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.838127   51259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.860811   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.871959   51259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:36:44.883805   51259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:36:44.883854   51259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:36:44.897827   51259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:36:44.907023   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:45.030074   51259 ssh_runner.go:195] Run: sudo systemctl restart crio
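	(Illustrative only: the sed edits above boil down to a handful of settings in the CRI-O drop-in; a quick grep, assuming the commands succeeded, would show them.)
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits logged above:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])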
	I0802 18:36:45.173080   51259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:36:45.173144   51259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:36:45.177770   51259 start.go:563] Will wait 60s for crictl version
	I0802 18:36:45.177818   51259 ssh_runner.go:195] Run: which crictl
	I0802 18:36:45.181264   51259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:36:45.224053   51259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:36:45.224125   51259 ssh_runner.go:195] Run: crio --version
	I0802 18:36:45.250127   51259 ssh_runner.go:195] Run: crio --version
	I0802 18:36:45.284964   51259 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:36:45.286218   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:45.289224   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:45.289765   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:45.289787   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:45.289967   51259 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0802 18:36:45.294143   51259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:36:45.306906   51259 kubeadm.go:883] updating cluster {Name:cert-expiration-139745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:36:45.307013   51259 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:36:45.307083   51259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:36:45.343398   51259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 18:36:45.343466   51259 ssh_runner.go:195] Run: which lz4
	I0802 18:36:45.347646   51259 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 18:36:45.351710   51259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:36:45.351733   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 18:36:43.817189   51349 machine.go:94] provisionDockerMachine start ...
	I0802 18:36:43.817207   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:43.817390   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.819966   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820376   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.820416   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820548   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.820711   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820843   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820985   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.821138   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.821320   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.821339   51349 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:36:43.939817   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:43.939849   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940132   51349 buildroot.go:166] provisioning hostname "pause-455569"
	I0802 18:36:43.940158   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940385   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.943989   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944498   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.944536   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944705   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.944923   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945109   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945269   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.945424   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.945757   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.945790   51349 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-455569 && echo "pause-455569" | sudo tee /etc/hostname
	I0802 18:36:44.073861   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:44.073897   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.535132   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535596   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.535637   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535814   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:44.536076   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536283   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536432   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:44.536674   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:44.536910   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:44.536936   51349 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-455569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-455569/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-455569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:36:44.656659   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:44.656695   51349 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:36:44.656719   51349 buildroot.go:174] setting up certificates
	I0802 18:36:44.656735   51349 provision.go:84] configureAuth start
	I0802 18:36:44.656748   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:44.657033   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:44.660513   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.660915   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.660942   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.661141   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.663667   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664015   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.664039   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664219   51349 provision.go:143] copyHostCerts
	I0802 18:36:44.664288   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:36:44.664299   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:36:44.664354   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:36:44.664475   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:36:44.664488   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:36:44.664525   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:36:44.664610   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:36:44.664621   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:36:44.664685   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:36:44.664757   51349 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.pause-455569 san=[127.0.0.1 192.168.39.26 localhost minikube pause-455569]
	I0802 18:36:45.112605   51349 provision.go:177] copyRemoteCerts
	I0802 18:36:45.112666   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:36:45.112688   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.115426   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.115785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115899   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.116100   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.116263   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.116420   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:45.209422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:36:45.234782   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0802 18:36:45.258977   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:36:45.288678   51349 provision.go:87] duration metric: took 631.931145ms to configureAuth
	I0802 18:36:45.288704   51349 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:36:45.288886   51349 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:45.288961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.291523   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291819   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.291854   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291998   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.292208   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292366   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292492   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.292625   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:45.292804   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:45.292820   51349 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:36:44.912092   51814 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:36:44.912141   51814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:36:44.912151   51814 cache.go:56] Caching tarball of preloaded images
	I0802 18:36:44.912267   51814 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:36:44.912280   51814 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:36:44.912381   51814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/force-systemd-flag-234725/config.json ...
	I0802 18:36:44.912399   51814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/force-systemd-flag-234725/config.json: {Name:mk07b892edc5389323866eae005bc07a79c213b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:44.912557   51814 start.go:360] acquireMachinesLock for force-systemd-flag-234725: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:36:46.580125   51259 crio.go:462] duration metric: took 1.232510516s to copy over tarball
	I0802 18:36:46.580208   51259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:36:48.723492   51259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143258379s)
	I0802 18:36:48.723511   51259 crio.go:469] duration metric: took 2.143370531s to extract the tarball
	I0802 18:36:48.723516   51259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:36:48.760057   51259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:36:48.802937   51259 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:36:48.802948   51259 cache_images.go:84] Images are preloaded, skipping loading
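	(Illustrative only: the same check the log performs before and after extracting the preload tarball can be run by hand; crictl is already present on the guest, as shown above.)
	    sudo crictl images --output json | head      # few or no images before extraction (crio.go:510 above)
	    sudo crictl images | grep kube-apiserver     # present once the tarball has been unpacked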
	I0802 18:36:48.802954   51259 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0802 18:36:48.803050   51259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-139745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:36:48.803128   51259 ssh_runner.go:195] Run: crio config
	I0802 18:36:48.846580   51259 cni.go:84] Creating CNI manager for ""
	I0802 18:36:48.846590   51259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:36:48.846598   51259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:36:48.846618   51259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-139745 NodeName:cert-expiration-139745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:36:48.846749   51259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-139745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:36:48.846803   51259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:36:48.856587   51259 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:36:48.856638   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:36:48.865982   51259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0802 18:36:48.882112   51259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:36:48.897398   51259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
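	(Illustrative only: where the three scp lines above place the kubelet unit, its drop-in, and the rendered kubeadm config on the node, and how to inspect them.)
	    sudo systemctl cat kubelet     # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    head /var/tmp/minikube/kubeadm.yaml.new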
	I0802 18:36:48.912591   51259 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0802 18:36:48.916254   51259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:36:48.927417   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:49.046569   51259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:36:49.062430   51259 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745 for IP: 192.168.61.201
	I0802 18:36:49.062441   51259 certs.go:194] generating shared ca certs ...
	I0802 18:36:49.062455   51259 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.062609   51259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:36:49.062640   51259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:36:49.062645   51259 certs.go:256] generating profile certs ...
	I0802 18:36:49.062704   51259 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key
	I0802 18:36:49.062713   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt with IP's: []
	I0802 18:36:49.131035   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt ...
	I0802 18:36:49.131050   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt: {Name:mk5c59c893e49c375a6ab761487cc225357b6856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.131236   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key ...
	I0802 18:36:49.131248   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key: {Name:mk4bc96b4ef670bd7861b3301f8fe9239292008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.131333   51259 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576
	I0802 18:36:49.131344   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.201]
	I0802 18:36:49.423749   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 ...
	I0802 18:36:49.423763   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576: {Name:mk104969b431e32fe293bdddd469a9c7320e89c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.423927   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576 ...
	I0802 18:36:49.423935   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576: {Name:mk5ec80321e7ca5d5852d32bd06da5aae4c6d9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.424008   51259 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt
	I0802 18:36:49.424091   51259 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key
	I0802 18:36:49.424143   51259 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key
	I0802 18:36:49.424153   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt with IP's: []
	I0802 18:36:49.634369   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt ...
	I0802 18:36:49.634383   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt: {Name:mkff097185d860903576931ebf8c3bf55f706f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.634544   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key ...
	I0802 18:36:49.634552   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key: {Name:mkf3da247d369553d8bcddd98b03fc90c30bbd03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
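The crypto.go calls above sign the per-profile certificates (the "minikube-user" client cert, the apiserver serving cert and the aggregator proxy-client cert) with the shared minikubeCA key reused from earlier runs. A hedged openssl sketch of the equivalent client-certificate flow; the file names and the system:masters organization are illustrative assumptions, not necessarily the exact parameters minikube uses:

    # assumed inputs: ca.crt / ca.key for the signing CA
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt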
	I0802 18:36:49.634717   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:36:49.634745   51259 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:36:49.634751   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:36:49.634773   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:36:49.634816   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:36:49.634841   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:36:49.634876   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:49.635511   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:36:49.660029   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:36:49.683014   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:36:49.705765   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:36:49.728533   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 18:36:49.751687   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:36:49.773783   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:36:49.795517   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:36:49.817494   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:36:49.844351   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:36:49.869090   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:36:49.894507   51259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:36:49.911801   51259 ssh_runner.go:195] Run: openssl version
	I0802 18:36:49.917220   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:36:49.926933   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.931217   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.931264   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.936834   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:36:49.946769   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:36:49.957622   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.961885   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.961924   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.967384   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:36:49.978385   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:36:49.988541   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.992767   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.992809   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.998221   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
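The ls / openssl x509 -hash / ln sequence repeated above is how each CA PEM is installed into the OpenSSL trust directory: the symlink name is the certificate's subject hash plus a .0 suffix (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two user-supplied certs). The same pattern for any CA file, as a small sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 marks the first certificate with this hash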
	I0802 18:36:50.008243   51259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:36:50.011870   51259 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 18:36:50.011918   51259 kubeadm.go:392] StartCluster: {Name:cert-expiration-139745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:50.012015   51259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:36:50.012065   51259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:36:50.054052   51259 cri.go:89] found id: ""
	I0802 18:36:50.054106   51259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:36:50.063861   51259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:36:50.072664   51259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:36:50.081708   51259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:36:50.081717   51259 kubeadm.go:157] found existing configuration files:
	
	I0802 18:36:50.081766   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:36:50.090912   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:36:50.090969   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:36:50.099947   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:36:50.108606   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:36:50.108656   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:36:50.117615   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:36:50.125856   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:36:50.125904   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:36:50.134422   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:36:50.143468   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:36:50.143535   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
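The four grep-then-rm pairs above are the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here none of them exist yet, so every grep exits with status 2 and the rm is a no-op. Condensed into a loop over the same endpoint and paths:

    for f in admin kubelet controller-manager scheduler; do
      grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"    # drop configs that point elsewhere or are absent
    done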
	I0802 18:36:50.152556   51259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:36:50.267472   51259 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 18:36:50.267584   51259 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:36:50.383085   51259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:36:50.383222   51259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:36:50.383365   51259 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:36:50.585989   51259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:36:50.668116   51259 out.go:204]   - Generating certificates and keys ...
	I0802 18:36:50.668246   51259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:36:50.668302   51259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:36:50.762364   51259 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 18:36:50.848904   51259 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 18:36:51.034812   51259 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 18:36:51.140679   51259 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 18:36:51.194923   51259 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 18:36:51.195194   51259 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-139745 localhost] and IPs [192.168.61.201 127.0.0.1 ::1]
	I0802 18:36:51.303096   51259 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 18:36:51.303281   51259 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-139745 localhost] and IPs [192.168.61.201 127.0.0.1 ::1]
	I0802 18:36:51.624757   51259 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 18:36:52.056614   51259 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 18:36:52.224864   51259 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 18:36:52.225112   51259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:36:52.289493   51259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:36:52.514021   51259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 18:36:52.635689   51259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:36:53.026581   51259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:36:53.084208   51259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:36:53.084999   51259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:36:53.090555   51259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:36:53.367799   51814 start.go:364] duration metric: took 8.455208995s to acquireMachinesLock for "force-systemd-flag-234725"
	I0802 18:36:53.367901   51814 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-234725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-234725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:36:53.368033   51814 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 18:36:53.369983   51814 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 18:36:53.370201   51814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:36:53.370267   51814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:36:53.386836   51814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0802 18:36:53.387256   51814 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:36:53.387792   51814 main.go:141] libmachine: Using API Version  1
	I0802 18:36:53.387812   51814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:36:53.388190   51814 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:36:53.388389   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .GetMachineName
	I0802 18:36:53.388522   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .DriverName
	I0802 18:36:53.388655   51814 start.go:159] libmachine.API.Create for "force-systemd-flag-234725" (driver="kvm2")
	I0802 18:36:53.388685   51814 client.go:168] LocalClient.Create starting
	I0802 18:36:53.388715   51814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 18:36:53.388751   51814 main.go:141] libmachine: Decoding PEM data...
	I0802 18:36:53.388770   51814 main.go:141] libmachine: Parsing certificate...
	I0802 18:36:53.388848   51814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 18:36:53.388873   51814 main.go:141] libmachine: Decoding PEM data...
	I0802 18:36:53.388890   51814 main.go:141] libmachine: Parsing certificate...
	I0802 18:36:53.388912   51814 main.go:141] libmachine: Running pre-create checks...
	I0802 18:36:53.388935   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .PreCreateCheck
	I0802 18:36:53.389302   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .GetConfigRaw
	I0802 18:36:53.389750   51814 main.go:141] libmachine: Creating machine...
	I0802 18:36:53.389766   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .Create
	I0802 18:36:53.389874   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating KVM machine...
	I0802 18:36:53.391174   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | found existing default KVM network
	I0802 18:36:53.392551   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.392409   51870 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:35:15} reservation:<nil>}
	I0802 18:36:53.393933   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.393857   51870 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4850}
	I0802 18:36:53.393963   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | created network xml: 
	I0802 18:36:53.393979   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | <network>
	I0802 18:36:53.393988   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <name>mk-force-systemd-flag-234725</name>
	I0802 18:36:53.394015   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <dns enable='no'/>
	I0802 18:36:53.394039   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   
	I0802 18:36:53.394053   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0802 18:36:53.394068   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |     <dhcp>
	I0802 18:36:53.394094   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0802 18:36:53.394132   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |     </dhcp>
	I0802 18:36:53.394146   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   </ip>
	I0802 18:36:53.394156   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   
	I0802 18:36:53.394165   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | </network>
	I0802 18:36:53.394181   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | 
	I0802 18:36:53.399578   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | trying to create private KVM network mk-force-systemd-flag-234725 192.168.50.0/24...
	I0802 18:36:53.470300   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | private KVM network mk-force-systemd-flag-234725 192.168.50.0/24 created
	I0802 18:36:53.470333   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 ...
	I0802 18:36:53.470359   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.470268   51870 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:53.470377   51814 main.go:141] libmachine: (force-systemd-flag-234725) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 18:36:53.470469   51814 main.go:141] libmachine: (force-systemd-flag-234725) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 18:36:53.712114   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.711996   51870 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/id_rsa...
	I0802 18:36:54.126457   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:54.126305   51870 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/force-systemd-flag-234725.rawdisk...
	I0802 18:36:54.126515   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Writing magic tar header
	I0802 18:36:54.126535   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Writing SSH key tar header
	I0802 18:36:54.126549   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:54.126451   51870 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 ...
	I0802 18:36:54.126566   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725
	I0802 18:36:54.126630   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 (perms=drwx------)
	I0802 18:36:54.126715   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 18:36:54.126739   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 18:36:54.126749   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 18:36:54.126760   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:54.126777   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 18:36:54.126786   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 18:36:54.126801   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins
	I0802 18:36:54.126817   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home
	I0802 18:36:54.126830   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Skipping /home - not owner
	I0802 18:36:54.126841   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 18:36:54.126854   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 18:36:54.126863   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 18:36:54.126873   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating domain...
	I0802 18:36:54.128120   51814 main.go:141] libmachine: (force-systemd-flag-234725) define libvirt domain using xml: 
	I0802 18:36:54.128146   51814 main.go:141] libmachine: (force-systemd-flag-234725) <domain type='kvm'>
	I0802 18:36:54.128159   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <name>force-systemd-flag-234725</name>
	I0802 18:36:54.128169   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <memory unit='MiB'>2048</memory>
	I0802 18:36:54.128182   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <vcpu>2</vcpu>
	I0802 18:36:54.128189   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <features>
	I0802 18:36:54.128197   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <acpi/>
	I0802 18:36:54.128205   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <apic/>
	I0802 18:36:54.128221   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <pae/>
	I0802 18:36:54.128238   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128252   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </features>
	I0802 18:36:54.128263   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <cpu mode='host-passthrough'>
	I0802 18:36:54.128273   51814 main.go:141] libmachine: (force-systemd-flag-234725)   
	I0802 18:36:54.128283   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </cpu>
	I0802 18:36:54.128294   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <os>
	I0802 18:36:54.128305   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <type>hvm</type>
	I0802 18:36:54.128320   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <boot dev='cdrom'/>
	I0802 18:36:54.128350   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <boot dev='hd'/>
	I0802 18:36:54.128361   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <bootmenu enable='no'/>
	I0802 18:36:54.128368   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </os>
	I0802 18:36:54.128376   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <devices>
	I0802 18:36:54.128383   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <disk type='file' device='cdrom'>
	I0802 18:36:54.128401   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/boot2docker.iso'/>
	I0802 18:36:54.128409   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target dev='hdc' bus='scsi'/>
	I0802 18:36:54.128415   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <readonly/>
	I0802 18:36:54.128423   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </disk>
	I0802 18:36:54.128429   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <disk type='file' device='disk'>
	I0802 18:36:54.128435   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 18:36:54.128446   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/force-systemd-flag-234725.rawdisk'/>
	I0802 18:36:54.128451   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target dev='hda' bus='virtio'/>
	I0802 18:36:54.128456   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </disk>
	I0802 18:36:54.128461   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <interface type='network'>
	I0802 18:36:54.128467   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source network='mk-force-systemd-flag-234725'/>
	I0802 18:36:54.128477   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <model type='virtio'/>
	I0802 18:36:54.128482   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </interface>
	I0802 18:36:54.128487   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <interface type='network'>
	I0802 18:36:54.128519   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source network='default'/>
	I0802 18:36:54.128547   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <model type='virtio'/>
	I0802 18:36:54.128565   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </interface>
	I0802 18:36:54.128586   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <serial type='pty'>
	I0802 18:36:54.128596   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target port='0'/>
	I0802 18:36:54.128607   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </serial>
	I0802 18:36:54.128616   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <console type='pty'>
	I0802 18:36:54.128627   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target type='serial' port='0'/>
	I0802 18:36:54.128636   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </console>
	I0802 18:36:54.128646   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <rng model='virtio'>
	I0802 18:36:54.128655   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <backend model='random'>/dev/random</backend>
	I0802 18:36:54.128665   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </rng>
	I0802 18:36:54.128674   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128686   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128695   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </devices>
	I0802 18:36:54.128707   51814 main.go:141] libmachine: (force-systemd-flag-234725) </domain>
	I0802 18:36:54.128718   51814 main.go:141] libmachine: (force-systemd-flag-234725) 
	I0802 18:36:54.134006   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:14:d3:d3 in network default
	I0802 18:36:54.134707   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring networks are active...
	I0802 18:36:54.134763   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:54.135665   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring network default is active
	I0802 18:36:54.136083   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring network mk-force-systemd-flag-234725 is active
	I0802 18:36:54.136786   51814 main.go:141] libmachine: (force-systemd-flag-234725) Getting domain xml...
	I0802 18:36:54.137681   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating domain...
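The kvm2 driver defines and boots the VM through the libvirt API using the domain XML it just printed. A hedged approximation of the same steps with plain virsh, assuming that XML were saved to force-systemd-flag-234725.xml:

    virsh net-start mk-force-systemd-flag-234725    # start the private network if it is not already active
    virsh define force-systemd-flag-234725.xml      # register the domain from the XML
    virsh start force-systemd-flag-234725           # boot it; the retry loop below then waits for a DHCP lease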
	I0802 18:36:53.092163   51259 out.go:204]   - Booting up control plane ...
	I0802 18:36:53.092286   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:36:53.092375   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:36:53.093044   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:36:53.113780   51259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:36:53.114472   51259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:36:53.114521   51259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:36:53.263848   51259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 18:36:53.263937   51259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 18:36:53.765360   51259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.772894ms
	I0802 18:36:53.765493   51259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 18:36:53.116134   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:36:53.116161   51349 machine.go:97] duration metric: took 9.298960301s to provisionDockerMachine
	I0802 18:36:53.116175   51349 start.go:293] postStartSetup for "pause-455569" (driver="kvm2")
	I0802 18:36:53.116189   51349 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:36:53.116209   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.116697   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:36:53.116735   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.120256   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.120785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120988   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.121169   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.121333   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.121531   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.213159   51349 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:36:53.217372   51349 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:36:53.217398   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:36:53.217466   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:36:53.217586   51349 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:36:53.217733   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:36:53.226713   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:53.253754   51349 start.go:296] duration metric: took 137.564126ms for postStartSetup
	I0802 18:36:53.253799   51349 fix.go:56] duration metric: took 9.461883705s for fixHost
	I0802 18:36:53.253823   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.256858   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257245   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.257275   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257499   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.257745   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.257961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.258127   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.258342   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:53.258577   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:53.258593   51349 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:36:53.367640   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623813.363097160
	
	I0802 18:36:53.367663   51349 fix.go:216] guest clock: 1722623813.363097160
	I0802 18:36:53.367670   51349 fix.go:229] Guest: 2024-08-02 18:36:53.36309716 +0000 UTC Remote: 2024-08-02 18:36:53.253804237 +0000 UTC m=+36.822748293 (delta=109.292923ms)
	I0802 18:36:53.367690   51349 fix.go:200] guest clock delta is within tolerance: 109.292923ms
	I0802 18:36:53.367695   51349 start.go:83] releasing machines lock for "pause-455569", held for 9.575807071s
	I0802 18:36:53.367715   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.367973   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:53.371290   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371672   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.371701   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371823   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372414   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372642   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372726   51349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:36:53.372772   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.372847   51349 ssh_runner.go:195] Run: cat /version.json
	I0802 18:36:53.372869   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.375636   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.375853   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376027   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376055   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376189   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376279   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376308   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376345   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376486   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376531   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376619   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376721   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.376782   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376901   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.468342   51349 ssh_runner.go:195] Run: systemctl --version
	I0802 18:36:53.496584   51349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:36:53.679352   51349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:36:53.689060   51349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:36:53.689143   51349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:36:53.711143   51349 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0802 18:36:53.711175   51349 start.go:495] detecting cgroup driver to use...
	I0802 18:36:53.711255   51349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:36:53.744892   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:36:53.762781   51349 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:36:53.762845   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:36:53.789046   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:36:53.916327   51349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:36:54.170102   51349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:36:54.365200   51349 docker.go:233] disabling docker service ...
	I0802 18:36:54.365285   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:36:54.422189   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:36:54.457595   51349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:36:54.713424   51349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:36:55.135741   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:36:55.167916   51349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:36:55.233366   51349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:36:55.233436   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.260193   51349 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:36:55.260274   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.275616   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.298506   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.314564   51349 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:36:55.335754   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.358202   51349 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.375757   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.391753   51349 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:36:55.404660   51349 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:36:55.416981   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:55.675813   51349 ssh_runner.go:195] Run: sudo systemctl restart crio
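The sed edits above are the CRI-O reconfiguration for this profile: pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", allow unprivileged low ports, then reload systemd units and restart crio. The two central edits plus the restart, taken directly from the commands logged:

    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio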
	I0802 18:36:59.266390   51259 kubeadm.go:310] [api-check] The API server is healthy after 5.502329671s
	I0802 18:36:59.280158   51259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 18:36:59.298888   51259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 18:36:59.336624   51259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 18:36:59.336882   51259 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-139745 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 18:36:59.351771   51259 kubeadm.go:310] [bootstrap-token] Using token: e2ysyn.f25ty5cly7qgtp0x
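The bootstrap token reported here is the 24-hour join credential declared in the InitConfiguration near the top of this run. Assuming a working admin.conf on the control-plane node, it can be inspected or re-issued later with kubeadm's token subcommands:

    kubeadm token list                    # shows the e2ysyn.* token and its expiry
    kubeadm token create --ttl 24h0m0s    # mints a fresh token once the original expires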
	I0802 18:36:55.447526   51814 main.go:141] libmachine: (force-systemd-flag-234725) Waiting to get IP...
	I0802 18:36:55.448438   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.448884   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.448924   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.448867   51870 retry.go:31] will retry after 256.074668ms: waiting for machine to come up
	I0802 18:36:55.706466   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.707142   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.707171   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.707051   51870 retry.go:31] will retry after 249.772964ms: waiting for machine to come up
	I0802 18:36:55.958640   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.959129   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.959162   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.959057   51870 retry.go:31] will retry after 397.047934ms: waiting for machine to come up
	I0802 18:36:56.357642   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:56.358143   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:56.358176   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:56.358080   51870 retry.go:31] will retry after 527.244851ms: waiting for machine to come up
	I0802 18:36:56.886666   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:56.887129   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:56.887158   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:56.887080   51870 retry.go:31] will retry after 681.858186ms: waiting for machine to come up
	I0802 18:36:57.570375   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:57.570911   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:57.570940   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:57.570862   51870 retry.go:31] will retry after 701.988959ms: waiting for machine to come up
	I0802 18:36:58.274839   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:58.275360   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:58.275391   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:58.275266   51870 retry.go:31] will retry after 1.087546581s: waiting for machine to come up
	I0802 18:36:59.363944   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:59.364433   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:59.364463   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:59.364390   51870 retry.go:31] will retry after 907.645437ms: waiting for machine to come up
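The force-systemd-flag-234725 lines above are libmachine polling libvirt for the new domain's DHCP lease, backing off a little longer after each miss (from roughly 250ms up to about a second so far). A minimal, self-contained Go sketch of that wait-for-IP pattern; the helper names and the exact backoff schedule are illustrative assumptions, not minikube's actual retry code:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt query that resolves the domain's
    // current DHCP lease; here it always fails so the retry loop is visible.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address of domain " + domain)
    }

    // waitForIP polls until the domain reports an IP or the deadline passes,
    // sleeping a jittered, slowly growing interval between attempts.
    func waitForIP(domain string, deadline time.Duration) (string, error) {
    	start := time.Now()
    	wait := 250 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Intn(100)) * time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
    		time.Sleep(wait + jitter)
    		wait += wait / 2 // grow roughly 1.5x per attempt
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP address", domain)
    }

    func main() {
    	if _, err := waitForIP("force-systemd-flag-234725", 3*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }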
	I0802 18:36:59.354099   51259 out.go:204]   - Configuring RBAC rules ...
	I0802 18:36:59.354249   51259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 18:36:59.367765   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 18:36:59.376862   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 18:36:59.383282   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 18:36:59.390269   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 18:36:59.395248   51259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 18:36:59.671829   51259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 18:37:00.116739   51259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 18:37:00.674056   51259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 18:37:00.675555   51259 kubeadm.go:310] 
	I0802 18:37:00.675641   51259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 18:37:00.675645   51259 kubeadm.go:310] 
	I0802 18:37:00.675726   51259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 18:37:00.675730   51259 kubeadm.go:310] 
	I0802 18:37:00.675749   51259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 18:37:00.675800   51259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 18:37:00.675840   51259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 18:37:00.675843   51259 kubeadm.go:310] 
	I0802 18:37:00.675884   51259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 18:37:00.675887   51259 kubeadm.go:310] 
	I0802 18:37:00.675958   51259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 18:37:00.675965   51259 kubeadm.go:310] 
	I0802 18:37:00.676017   51259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 18:37:00.676125   51259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 18:37:00.676222   51259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 18:37:00.676230   51259 kubeadm.go:310] 
	I0802 18:37:00.676312   51259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 18:37:00.676393   51259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 18:37:00.676398   51259 kubeadm.go:310] 
	I0802 18:37:00.676504   51259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e2ysyn.f25ty5cly7qgtp0x \
	I0802 18:37:00.676631   51259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 18:37:00.676660   51259 kubeadm.go:310] 	--control-plane 
	I0802 18:37:00.676665   51259 kubeadm.go:310] 
	I0802 18:37:00.676782   51259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 18:37:00.676804   51259 kubeadm.go:310] 
	I0802 18:37:00.676893   51259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e2ysyn.f25ty5cly7qgtp0x \
	I0802 18:37:00.677011   51259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 18:37:00.677342   51259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:37:00.677360   51259 cni.go:84] Creating CNI manager for ""
	I0802 18:37:00.677368   51259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:00.679214   51259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:37:00.680698   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:37:00.691896   51259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 18:37:00.714792   51259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:37:00.714854   51259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:37:00.714908   51259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-139745 minikube.k8s.io/updated_at=2024_08_02T18_37_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=cert-expiration-139745 minikube.k8s.io/primary=true
	I0802 18:37:00.728735   51259 ops.go:34] apiserver oom_adj: -16
	I0802 18:37:00.889036   51259 kubeadm.go:1113] duration metric: took 174.252113ms to wait for elevateKubeSystemPrivileges
	I0802 18:37:00.910885   51259 kubeadm.go:394] duration metric: took 10.898962331s to StartCluster
	I0802 18:37:00.910919   51259 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:00.911007   51259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:37:00.912552   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:00.912806   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 18:37:00.912832   51259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:37:00.912897   51259 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 18:37:00.912948   51259 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-139745"
	I0802 18:37:00.912965   51259 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-139745"
	I0802 18:37:00.912978   51259 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-139745"
	I0802 18:37:00.913011   51259 host.go:66] Checking if "cert-expiration-139745" exists ...
	I0802 18:37:00.913021   51259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-139745"
	I0802 18:37:00.913034   51259 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:00.913493   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.913509   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.913532   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.913612   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.914344   51259 out.go:177] * Verifying Kubernetes components...
	I0802 18:37:00.915787   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:37:00.929261   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0802 18:37:00.929685   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.930225   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.930244   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.930584   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.930827   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0802 18:37:00.931205   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.931200   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.931236   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.931669   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.931686   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.932012   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.932245   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.935351   51259 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-139745"
	I0802 18:37:00.935374   51259 host.go:66] Checking if "cert-expiration-139745" exists ...
	I0802 18:37:00.935630   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.935655   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.946174   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0802 18:37:00.946670   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.947096   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.947130   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.947663   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.947853   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.949599   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:37:00.951263   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0802 18:37:00.951344   51259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:36:57.976373   48425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:36:57.976489   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:36:57.976733   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:00.951693   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.952211   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.952228   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.952542   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.952679   51259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:37:00.952688   51259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 18:37:00.952704   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:37:00.953122   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.953152   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.955744   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.956138   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:37:00.956150   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.956305   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:37:00.956476   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:37:00.956626   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:37:00.956767   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:37:00.969135   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
	I0802 18:37:00.969469   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.969927   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.969937   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.970233   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.970393   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.972266   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:37:00.972464   51259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 18:37:00.972471   51259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 18:37:00.972483   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:37:00.975236   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.975637   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:37:00.975667   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.975856   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:37:00.976056   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:37:00.976198   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:37:00.976321   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:37:01.170401   51259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:37:01.170443   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 18:37:01.218246   51259 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:37:01.218292   51259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:37:01.262617   51259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 18:37:01.291830   51259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:37:01.568357   51259 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
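The long sed | kubectl replace pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway, which is what the "host record injected" message confirms. Based purely on the expressions in that command, the patched Corefile gains a log directive before errors and, just above the forward . /etc/resolv.conf line, a block like:

    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }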
	I0802 18:37:01.568404   51259 api_server.go:72] duration metric: took 655.54224ms to wait for apiserver process to appear ...
	I0802 18:37:01.568419   51259 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:37:01.568438   51259 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0802 18:37:01.568467   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.568479   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.568794   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.568816   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.568818   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.568823   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.568831   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.569067   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.569075   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.569084   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.578878   51259 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0802 18:37:01.584458   51259 api_server.go:141] control plane version: v1.30.3
	I0802 18:37:01.584473   51259 api_server.go:131] duration metric: took 16.048964ms to wait for apiserver health ...
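The healthz wait above is essentially an HTTPS GET against the apiserver's /healthz endpoint, treated as healthy once it answers 200 with the body "ok", as logged just before the version check. A minimal Go sketch of such a probe (illustrative only, not minikube's api_server.go; TLS verification is skipped because this sketch does not load the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Short timeout per attempt; a caller would loop until healthy or a deadline passes.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.61.201:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }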
	I0802 18:37:01.584480   51259 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:37:01.592685   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.592699   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.593089   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.593107   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.593116   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.593705   51259 system_pods.go:59] 4 kube-system pods found
	I0802 18:37:01.593721   51259 system_pods.go:61] "etcd-cert-expiration-139745" [dde8f282-e341-48cb-9897-069e1c320ecb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 18:37:01.593728   51259 system_pods.go:61] "kube-apiserver-cert-expiration-139745" [744503b5-e26d-4bda-9636-dfdedbc526b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 18:37:01.593733   51259 system_pods.go:61] "kube-controller-manager-cert-expiration-139745" [639bab33-005b-4205-a54a-7a4e0ff3f1c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 18:37:01.593739   51259 system_pods.go:61] "kube-scheduler-cert-expiration-139745" [80f8cbdd-b8b3-4fb6-b8f7-8543165e7fd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 18:37:01.593744   51259 system_pods.go:74] duration metric: took 9.259244ms to wait for pod list to return data ...
	I0802 18:37:01.593751   51259 kubeadm.go:582] duration metric: took 680.897267ms to wait for: map[apiserver:true system_pods:true]
	I0802 18:37:01.593761   51259 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:37:01.598810   51259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:37:01.598827   51259 node_conditions.go:123] node cpu capacity is 2
	I0802 18:37:01.598837   51259 node_conditions.go:105] duration metric: took 5.071367ms to run NodePressure ...
	I0802 18:37:01.598850   51259 start.go:241] waiting for startup goroutines ...
	I0802 18:37:01.771846   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.771862   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.772136   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.772148   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.772158   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.772166   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.772389   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.772400   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.773978   51259 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0802 18:37:01.775067   51259 addons.go:510] duration metric: took 862.167199ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0802 18:37:02.072855   51259 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-139745" context rescaled to 1 replicas
	I0802 18:37:02.072881   51259 start.go:246] waiting for cluster config update ...
	I0802 18:37:02.072890   51259 start.go:255] writing updated cluster config ...
	I0802 18:37:02.073136   51259 ssh_runner.go:195] Run: rm -f paused
	I0802 18:37:02.118824   51259 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 18:37:02.120763   51259 out.go:177] * Done! kubectl is now configured to use "cert-expiration-139745" cluster and "default" namespace by default
	I0802 18:37:00.273413   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:00.273843   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:00.273866   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:00.273797   51870 retry.go:31] will retry after 1.200432562s: waiting for machine to come up
	I0802 18:37:01.476140   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:01.476617   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:01.476646   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:01.476562   51870 retry.go:31] will retry after 2.291414721s: waiting for machine to come up
	I0802 18:37:03.769330   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:03.769860   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:03.769888   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:03.769797   51870 retry.go:31] will retry after 2.203601404s: waiting for machine to come up
	I0802 18:37:05.974875   51349 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.29902423s)
	I0802 18:37:05.974915   51349 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:37:05.974973   51349 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:37:05.979907   51349 start.go:563] Will wait 60s for crictl version
	I0802 18:37:05.979952   51349 ssh_runner.go:195] Run: which crictl
	I0802 18:37:05.983635   51349 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:37:06.018370   51349 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:37:06.018446   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.045659   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.076497   51349 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:37:06.077558   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:37:06.080529   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.080886   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:37:06.080908   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.081163   51349 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:37:06.085397   51349 kubeadm.go:883] updating cluster {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:37:06.085545   51349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:37:06.085616   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.126311   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.126333   51349 crio.go:433] Images already preloaded, skipping extraction
	I0802 18:37:06.126380   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.163548   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.163583   51349 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:37:06.163593   51349 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.30.3 crio true true} ...
	I0802 18:37:06.163744   51349 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:37:06.163835   51349 ssh_runner.go:195] Run: crio config
	I0802 18:37:06.216364   51349 cni.go:84] Creating CNI manager for ""
	I0802 18:37:06.216384   51349 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:06.216394   51349 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:37:06.216413   51349 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455569 NodeName:pause-455569 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:37:06.216531   51349 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455569"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:37:06.216591   51349 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:37:06.226521   51349 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:37:06.226590   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:37:06.235989   51349 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0802 18:37:06.252699   51349 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:37:06.268779   51349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 18:37:06.285022   51349 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0802 18:37:06.288803   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:37:06.424154   51349 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:37:06.439377   51349 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569 for IP: 192.168.39.26
	I0802 18:37:06.439402   51349 certs.go:194] generating shared ca certs ...
	I0802 18:37:06.439421   51349 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:06.439597   51349 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:37:06.439652   51349 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:37:06.439661   51349 certs.go:256] generating profile certs ...
	I0802 18:37:06.439745   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/client.key
	I0802 18:37:06.439838   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key.baed76b2
	I0802 18:37:06.439873   51349 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key
	I0802 18:37:06.440019   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:37:06.440054   51349 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:37:06.440064   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:37:06.440087   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:37:06.440113   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:37:06.440130   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:37:06.440164   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:37:06.440694   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:37:06.465958   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:37:02.977213   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:37:02.977527   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:05.974988   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:05.975554   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:05.975574   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:05.975515   51870 retry.go:31] will retry after 2.769051441s: waiting for machine to come up
	I0802 18:37:08.745890   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:08.746372   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:08.746405   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:08.746341   51870 retry.go:31] will retry after 2.778647468s: waiting for machine to come up
	I0802 18:37:06.490272   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:37:06.512930   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:37:06.534670   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 18:37:06.557698   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 18:37:06.579821   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:37:06.601422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:37:06.624191   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:37:06.646603   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:37:06.672574   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:37:06.695750   51349 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:37:06.711231   51349 ssh_runner.go:195] Run: openssl version
	I0802 18:37:06.716980   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:37:06.727800   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732170   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732226   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.738156   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:37:06.747617   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:37:06.757803   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761886   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761937   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.767668   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:37:06.776348   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:37:06.786787   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790824   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790873   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.796473   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:37:06.806038   51349 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:37:06.810368   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:37:06.815765   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:37:06.821175   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:37:06.826527   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:37:06.831536   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:37:06.836568   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
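Each openssl x509 -checkend 86400 call above asks whether the given control-plane certificate will still be valid 86400 seconds (24 hours) from now; openssl exits non-zero if it expires within that window. The same check expressed as a small Go program (a sketch: the path is simply the first certificate tested above, and the helper name is made up):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // at least d from now, i.e. the Go equivalent of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }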
	I0802 18:37:06.841945   51349 kubeadm.go:392] StartCluster: {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:37:06.842088   51349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:37:06.842131   51349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:37:06.876875   51349 cri.go:89] found id: "bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d"
	I0802 18:37:06.876910   51349 cri.go:89] found id: "e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544"
	I0802 18:37:06.876918   51349 cri.go:89] found id: "3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8"
	I0802 18:37:06.876924   51349 cri.go:89] found id: "d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe"
	I0802 18:37:06.876929   51349 cri.go:89] found id: "c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7"
	I0802 18:37:06.876935   51349 cri.go:89] found id: "64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6"
	I0802 18:37:06.876939   51349 cri.go:89] found id: "cd4c6565542c91adb90cecb787b79f87939fdb0e03a0aa9dad1a1f778becdbc4"
	I0802 18:37:06.876944   51349 cri.go:89] found id: "51defafa540f57928366e7d3101908daa839051eb51c6250f5aefe9a4af1e3ee"
	I0802 18:37:06.876949   51349 cri.go:89] found id: "1457c2f2941eafeeaa86f8cf787a8da01a73f949da71a1a6ef8af37ac63ffd85"
	I0802 18:37:06.876958   51349 cri.go:89] found id: "b83d690b8c4f1408d97e336b93e91b91bf371aefc601b1793a7485e785665d18"
	I0802 18:37:06.876963   51349 cri.go:89] found id: "e5647b8714ff3460a485e6cdd00b03f7d8ff47b859819cb0aa43fca94682d24e"
	I0802 18:37:06.876967   51349 cri.go:89] found id: "56f59a67c271d9a0dc015537492509698838cb31b03a4e2b6de0c56b92bab8b2"
	I0802 18:37:06.876972   51349 cri.go:89] found id: ""
	I0802 18:37:06.877032   51349 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.836844023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3218d18-e56e-41ef-b166-4bd024f2d1e8 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.838391150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=595abb40-97c1-4e20-bfda-c450c669a2c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.838868173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623850838842616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=595abb40-97c1-4e20-bfda-c450c669a2c5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.839566681Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54d49774-6efd-40e3-b3d3-8ec62a37aae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.839656359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54d49774-6efd-40e3-b3d3-8ec62a37aae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.839923702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54d49774-6efd-40e3-b3d3-8ec62a37aae4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.855617968Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=daf59281-fb52-46ef-826e-b2437f0a255c name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.855937575Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ffnn,Uid:201bc75b-6530-4c5b-8fc6-ae08db2bcf12,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623833024132456,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:37:12.619976819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&PodSandboxMetadata{Name:kube-proxy-b4mf7,Uid:22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1722623832951342040,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:37:12.619987240Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&PodSandboxMetadata{Name:etcd-pause-455569,Uid:dd70b4af1f21d296a10445f25a0431af,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829116092141,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.39.26:2379,kubernetes.io/config.hash: dd70b4af1f21d296a10445f25a0431af,kubernetes.io/config.seen: 2024-08-02T18:37:08.620799181Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-455569,Uid:59d36393c6d3cc00baaad9eefe8d2b47,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829104599672,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59d36393c6d3cc00baaad9eefe8d2b47,kubernetes.io/config.seen: 2024-08-02T18:37:08.620801288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba9db4d615d05ece5860fe67f7e73d1544
488ea6a7078ec18948fa70281db421,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-455569,Uid:2893a33bc31a1e8eccfadfb90793698b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829103227404,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2893a33bc31a1e8eccfadfb90793698b,kubernetes.io/config.seen: 2024-08-02T18:37:08.620795297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-455569,Uid:90f9b6215e37d314780312b920c52725,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829099273858,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.26:8443,kubernetes.io/config.hash: 90f9b6215e37d314780312b920c52725,kubernetes.io/config.seen: 2024-08-02T18:37:08.620800390Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ffnn,Uid:201bc75b-6530-4c5b-8fc6-ae08db2bcf12,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813973479371,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2024-08-02T18:36:13.171463568Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-455569,Uid:59d36393c6d3cc00baaad9eefe8d2b47,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813764509412,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59d36393c6d3cc00baaad9eefe8d2b47,kubernetes.io/config.seen: 2024-08-02T18:35:59.377516736Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-455569,Uid:90f9b6215e37d314780312b920c5272
5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813761079142,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.26:8443,kubernetes.io/config.hash: 90f9b6215e37d314780312b920c52725,kubernetes.io/config.seen: 2024-08-02T18:35:59.377515473Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&PodSandboxMetadata{Name:etcd-pause-455569,Uid:dd70b4af1f21d296a10445f25a0431af,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813755400517,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.26:2379,kubernetes.io/config.hash: dd70b4af1f21d296a10445f25a0431af,kubernetes.io/config.seen: 2024-08-02T18:35:59.377512257Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&PodSandboxMetadata{Name:kube-proxy-b4mf7,Uid:22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813742658948,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:36:13.063647160Z,kubernetes.i
o/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-455569,Uid:2893a33bc31a1e8eccfadfb90793698b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813698828995,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2893a33bc31a1e8eccfadfb90793698b,kubernetes.io/config.seen: 2024-08-02T18:35:59.377518065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=daf59281-fb52-46ef-826e-b2437f0a255c name=/runtime.v1.RuntimeService/ListPodSandbox
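The ListPodSandbox response above lists each kube-system pod twice: an Attempt:2 sandbox in SANDBOX_READY created after the restart, and the earlier Attempt:1 sandbox left in SANDBOX_NOTREADY. A sketch of the same call with a state filter, reusing the rt client from the previous example (the helper name is illustrative):

// Sketch only, reusing the RuntimeService client from the previous example.
// Filtering on SANDBOX_READY keeps just the Attempt:2 sandboxes seen above;
// the Attempt:1 sandboxes remain NOTREADY after the restart.
func listReadySandboxes(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	ready := runtimeapi.PodSandboxState_SANDBOX_READY
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{State: ready},
		},
	})
	if err != nil {
		return err
	}
	for _, s := range resp.Items {
		fmt.Printf("%s %s/%s attempt=%d\n",
			s.Id[:12], s.Metadata.Namespace, s.Metadata.Name, s.Metadata.Attempt)
	}
	return nil
}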
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.856683980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=498cc215-69ea-43c2-add6-7d83c14cf6b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.856743721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=498cc215-69ea-43c2-add6-7d83c14cf6b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.857256090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=498cc215-69ea-43c2-add6-7d83c14cf6b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.886337517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d216d1b-a833-4035-a79d-2b00a4e6d128 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.886429179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d216d1b-a833-4035-a79d-2b00a4e6d128 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.887505869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33e41b5f-f028-41fc-b5b6-69766cbe6f9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.887943942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623850887904348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33e41b5f-f028-41fc-b5b6-69766cbe6f9f name=/runtime.v1.ImageService/ImageFsInfo
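The ImageFsInfo exchanges in this log report image-store usage for /var/lib/containers/storage/overlay-images. That RPC belongs to the CRI ImageService rather than the RuntimeService; a sketch reusing the connection from the first example, again illustrative only:

// Sketch only: the ImageFsInfo responses above come from the ImageService,
// a separate CRI service that can share the same gRPC connection.
func printImageFsUsage(ctx context.Context, conn *grpc.ClientConn) error {
	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		return err
	}
	for _, fs := range resp.ImageFilesystems {
		fmt.Printf("%s used=%d bytes inodes=%d\n",
			fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
	}
	return nil
}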
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.888724326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c594e72-9024-4749-8111-a4668efc7ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.888874424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c594e72-9024-4749-8111-a4668efc7ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.889775233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c594e72-9024-4749-8111-a4668efc7ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.944485954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50f94986-55a9-4bb9-8679-c7092150ee53 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.944595155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50f94986-55a9-4bb9-8679-c7092150ee53 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.945672061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ecba85a-19c8-4d9b-9552-5c75a63e2bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.946025587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623850946004521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ecba85a-19c8-4d9b-9552-5c75a63e2bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.946654749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec6c4fce-38cf-449e-9123-e133d4c00be7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.946719931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec6c4fce-38cf-449e-9123-e133d4c00be7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:30 pause-455569 crio[2709]: time="2024-08-02 18:37:30.946963229Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec6c4fce-38cf-449e-9123-e133d4c00be7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc89b19747e3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 seconds ago      Running             coredns                   2                   ac9420bcf8428       coredns-7db6d8ff4d-5ffnn
	dcc8d2d8519de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago      Running             kube-proxy                2                   2101f4df6236b       kube-proxy-b4mf7
	2c8e18ca5250c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago      Running             etcd                      2                   dbd34b845ef40       etcd-pause-455569
	5716d4ee88cae       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago      Running             kube-scheduler            2                   ba9db4d615d05       kube-scheduler-pause-455569
	44e5e20af30de       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago      Running             kube-controller-manager   2                   b85a4b7c57f52       kube-controller-manager-pause-455569
	c3aecacfbf4c5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago      Running             kube-apiserver            2                   c273d0224b7ab       kube-apiserver-pause-455569
	bb1163e84ba44       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   24c3da47f854c       coredns-7db6d8ff4d-5ffnn
	e474aad35defa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   36 seconds ago      Exited              kube-proxy                1                   78fd98f44b2e6       kube-proxy-b4mf7
	3d8d0760aafd3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   36 seconds ago      Exited              kube-scheduler            1                   f81dc08a75a6f       kube-scheduler-pause-455569
	d17d2954528e5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   36 seconds ago      Exited              etcd                      1                   5fa064106599a       etcd-pause-455569
	c767e060079f5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   36 seconds ago      Exited              kube-apiserver            1                   c77d1417718b6       kube-apiserver-pause-455569
	64a6eabb02ce1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   36 seconds ago      Exited              kube-controller-manager   1                   2aa7a486127f7       kube-controller-manager-pause-455569
	
	
	==> coredns [bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34809 - 41704 "HINFO IN 110192597553160868.684303896014414589. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.009315332s
	
	
	==> coredns [cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39731 - 28807 "HINFO IN 4922020822728551846.3276665842435115586. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010589073s
	
	
	==> describe nodes <==
	Name:               pause-455569
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-455569
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=pause-455569
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_36_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-455569
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:37:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    pause-455569
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac767b38fd8c4f5786712313f8649e7f
	  System UUID:                ac767b38-fd8c-4f57-8671-2313f8649e7f
	  Boot ID:                    46d84004-9124-4cb9-bd03-a90321200821
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5ffnn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-pause-455569                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-455569             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-455569    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-b4mf7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-455569             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 77s                kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-455569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-455569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-455569 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeReady                91s                kubelet          Node pause-455569 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node pause-455569 event: Registered Node pause-455569 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-455569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-455569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-455569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-455569 event: Registered Node pause-455569 in Controller
	
	
	==> dmesg <==
	[ +10.495973] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.057306] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047911] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.198508] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.123798] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.256233] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.187352] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.574879] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +0.058773] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991092] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.088586] kauditd_printk_skb: 69 callbacks suppressed
	[Aug 2 18:36] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.100129] kauditd_printk_skb: 21 callbacks suppressed
	[ +41.185369] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.294729] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.189727] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.323284] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	[  +0.328590] systemd-fstab-generator[2575]: Ignoring "noauto" option for root device
	[  +0.599928] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[Aug 2 18:37] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +0.081669] kauditd_printk_skb: 173 callbacks suppressed
	[  +1.998248] systemd-fstab-generator[3082]: Ignoring "noauto" option for root device
	[  +4.549181] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.994837] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.711306] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	
	
	==> etcd [2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511] <==
	{"level":"info","ts":"2024-08-02T18:37:24.881004Z","caller":"traceutil/trace.go:171","msg":"trace[547944485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:460; }","duration":"320.477733ms","start":"2024-08-02T18:37:24.560516Z","end":"2024-08-02T18:37:24.880994Z","steps":["trace[547944485] 'agreement among raft nodes before linearized reading'  (duration: 320.398424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:24.881032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.560508Z","time spent":"320.514823ms","remote":"127.0.0.1:44800","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"warn","ts":"2024-08-02T18:37:24.881262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.585416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:24.881311Z","caller":"traceutil/trace.go:171","msg":"trace[427933947] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:460; }","duration":"305.703632ms","start":"2024-08-02T18:37:24.575596Z","end":"2024-08-02T18:37:24.8813Z","steps":["trace[427933947] 'agreement among raft nodes before linearized reading'  (duration: 305.555492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:24.88134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.575579Z","time spent":"305.752445ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	{"level":"info","ts":"2024-08-02T18:37:25.276735Z","caller":"traceutil/trace.go:171","msg":"trace[1929792185] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"200.96358ms","start":"2024-08-02T18:37:25.075754Z","end":"2024-08-02T18:37:25.276718Z","steps":["trace[1929792185] 'read index received'  (duration: 125.192522ms)","trace[1929792185] 'applied index is now lower than readState.Index'  (duration: 75.770244ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:37:25.276994Z","caller":"traceutil/trace.go:171","msg":"trace[1624319750] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"379.380698ms","start":"2024-08-02T18:37:24.897599Z","end":"2024-08-02T18:37:25.276979Z","steps":["trace[1624319750] 'process raft request'  (duration: 303.348627ms)","trace[1624319750] 'compare'  (duration: 75.676245ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:25.278679Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.897588Z","time spent":"381.033404ms","remote":"127.0.0.1:45074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:399 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:25.277165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.590094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T18:37:25.27887Z","caller":"traceutil/trace.go:171","msg":"trace[1321782796] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:462; }","duration":"115.343118ms","start":"2024-08-02T18:37:25.163514Z","end":"2024-08-02T18:37:25.278857Z","steps":["trace[1321782796] 'agreement among raft nodes before linearized reading'  (duration: 113.58704ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:25.277268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.527514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:25.279027Z","caller":"traceutil/trace.go:171","msg":"trace[912927533] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:462; }","duration":"203.309402ms","start":"2024-08-02T18:37:25.075709Z","end":"2024-08-02T18:37:25.279018Z","steps":["trace[912927533] 'agreement among raft nodes before linearized reading'  (duration: 201.521325ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:25.732107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.085792ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156821228985899 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" value_size:6912 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T18:37:25.732606Z","caller":"traceutil/trace.go:171","msg":"trace[897316173] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"440.273588ms","start":"2024-08-02T18:37:25.292312Z","end":"2024-08-02T18:37:25.732586Z","steps":["trace[897316173] 'process raft request'  (duration: 315.603529ms)","trace[897316173] 'compare'  (duration: 123.981904ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:25.732701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.292301Z","time spent":"440.35349ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6974,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" value_size:6912 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:26.223398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.860162ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156821228985900 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:462 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T18:37:26.223498Z","caller":"traceutil/trace.go:171","msg":"trace[174708686] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:499; }","duration":"647.779247ms","start":"2024-08-02T18:37:25.575705Z","end":"2024-08-02T18:37:26.223484Z","steps":["trace[174708686] 'read index received'  (duration: 32.219223ms)","trace[174708686] 'applied index is now lower than readState.Index'  (duration: 615.555969ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:37:26.223581Z","caller":"traceutil/trace.go:171","msg":"trace[1291130132] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"928.490646ms","start":"2024-08-02T18:37:25.29508Z","end":"2024-08-02T18:37:26.223571Z","steps":["trace[1291130132] 'process raft request'  (duration: 648.378583ms)","trace[1291130132] 'compare'  (duration: 279.675524ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:26.223666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.29507Z","time spent":"928.549177ms","remote":"127.0.0.1:45074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:462 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:26.223807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.097919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:26.223856Z","caller":"traceutil/trace.go:171","msg":"trace[1039103649] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:464; }","duration":"648.140198ms","start":"2024-08-02T18:37:25.5757Z","end":"2024-08-02T18:37:26.22384Z","steps":["trace[1039103649] 'agreement among raft nodes before linearized reading'  (duration: 648.049326ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:26.223897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.575682Z","time spent":"648.206656ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	{"level":"warn","ts":"2024-08-02T18:37:26.224087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"481.284418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:26.224132Z","caller":"traceutil/trace.go:171","msg":"trace[916008426] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:464; }","duration":"481.356026ms","start":"2024-08-02T18:37:25.742767Z","end":"2024-08-02T18:37:26.224123Z","steps":["trace[916008426] 'agreement among raft nodes before linearized reading'  (duration: 481.290039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:26.224155Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.742754Z","time spent":"481.395745ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	
	
	==> etcd [d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe] <==
	{"level":"info","ts":"2024-08-02T18:36:55.011343Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:36:55.191737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.209563Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:pause-455569 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:36:55.209616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:36:55.210097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:36:55.234681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-02T18:36:55.241144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:36:55.258803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:36:55.280868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:36:55.701405Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-02T18:36:55.701505Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-455569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-08-02T18:36:55.701623Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.701665Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.707221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.707272Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T18:36:55.707327Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-08-02T18:36:55.717073Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-02T18:36:55.721388Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-02T18:36:55.721437Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-455569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> kernel <==
	 18:37:31 up 2 min,  0 users,  load average: 0.76, 0.34, 0.13
	Linux pause-455569 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21] <==
	I0802 18:37:12.352959       1 policy_source.go:224] refreshing policies
	I0802 18:37:12.395889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:37:12.396436       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 18:37:12.396529       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 18:37:12.396665       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 18:37:12.398495       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:37:12.401277       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 18:37:12.401933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:37:12.402236       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0802 18:37:12.409471       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0802 18:37:13.217153       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:37:13.905410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 18:37:13.919944       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 18:37:13.976350       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 18:37:14.016250       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:37:14.024718       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:37:24.514398       1 controller.go:615] quota admission added evaluator for: endpoints
	I0802 18:37:24.896023       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:37:26.224909       1 trace.go:236] Trace[1766215622]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7ace95dc-854f-4c60-9265-92e6ba476608,client:192.168.39.26,api-group:apps,api-version:v1,name:coredns,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:deployment-controller,verb:PUT (02-Aug-2024 18:37:25.292) (total time: 932ms):
	Trace[1766215622]: ["GuaranteedUpdate etcd3" audit-id:7ace95dc-854f-4c60-9265-92e6ba476608,key:/deployments/kube-system/coredns,type:*apps.Deployment,resource:deployments.apps 932ms (18:37:25.292)
	Trace[1766215622]:  ---"Txn call completed" 929ms (18:37:26.224)]
	Trace[1766215622]: [932.142619ms] [932.142619ms] END
	I0802 18:37:26.227236       1 trace.go:236] Trace[437600409]: "Get" accept:application/json, */*,audit-id:98f7a1c4-ea54-433c-adb7-9efb2f322d8f,client:192.168.39.1,api-group:,api-version:v1,name:kube-apiserver-pause-455569,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-455569,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (02-Aug-2024 18:37:25.575) (total time: 652ms):
	Trace[437600409]: ---"About to write a response" 650ms (18:37:26.225)
	Trace[437600409]: [652.043182ms] [652.043182ms] END
	
	
	==> kube-apiserver [c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7] <==
	I0802 18:36:54.974491       1 options.go:221] external host was not specified, using 192.168.39.26
	I0802 18:36:54.975607       1 server.go:148] Version: v1.30.3
	I0802 18:36:54.975642       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec] <==
	I0802 18:37:24.586715       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0802 18:37:24.587913       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 18:37:24.591274       1 shared_informer.go:320] Caches are synced for TTL
	I0802 18:37:24.595636       1 shared_informer.go:320] Caches are synced for PV protection
	I0802 18:37:24.596863       1 shared_informer.go:320] Caches are synced for node
	I0802 18:37:24.596947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0802 18:37:24.596978       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0802 18:37:24.596983       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0802 18:37:24.596989       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0802 18:37:24.608648       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0802 18:37:24.615018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0802 18:37:24.669088       1 shared_informer.go:320] Caches are synced for stateful set
	I0802 18:37:24.673387       1 shared_informer.go:320] Caches are synced for PVC protection
	I0802 18:37:24.698877       1 shared_informer.go:320] Caches are synced for persistent volume
	I0802 18:37:24.701378       1 shared_informer.go:320] Caches are synced for attach detach
	I0802 18:37:24.704035       1 shared_informer.go:320] Caches are synced for ephemeral
	I0802 18:37:24.719804       1 shared_informer.go:320] Caches are synced for expand
	I0802 18:37:24.736166       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 18:37:24.777259       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0802 18:37:24.781403       1 shared_informer.go:320] Caches are synced for cronjob
	I0802 18:37:24.786158       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 18:37:24.805411       1 shared_informer.go:320] Caches are synced for job
	I0802 18:37:25.212530       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:37:25.264284       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:37:25.264441       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6] <==
	
	
	==> kube-proxy [dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae] <==
	I0802 18:37:13.219744       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:37:13.235745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0802 18:37:13.309493       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:37:13.309554       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:37:13.309574       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:37:13.315295       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:37:13.315475       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:37:13.315500       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:37:13.316906       1 config.go:192] "Starting service config controller"
	I0802 18:37:13.316945       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:37:13.316970       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:37:13.316974       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:37:13.317548       1 config.go:319] "Starting node config controller"
	I0802 18:37:13.317572       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:37:13.418038       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:37:13.418248       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:37:13.418272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544] <==
	
	
	==> kube-scheduler [3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8] <==
	
	
	==> kube-scheduler [5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46] <==
	I0802 18:37:10.184874       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:37:12.234265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:37:12.234437       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:37:12.234467       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:37:12.234532       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:37:12.296093       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:37:12.297306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:37:12.303878       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:37:12.303960       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:37:12.304798       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:37:12.309167       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:37:12.405099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848517    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90f9b6215e37d314780312b920c52725-usr-share-ca-certificates\") pod \"kube-apiserver-pause-455569\" (UID: \"90f9b6215e37d314780312b920c52725\") " pod="kube-system/kube-apiserver-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848534    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-flexvolume-dir\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848547    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-k8s-certs\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848564    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-kubeconfig\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848580    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848593    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90f9b6215e37d314780312b920c52725-k8s-certs\") pod \"kube-apiserver-pause-455569\" (UID: \"90f9b6215e37d314780312b920c52725\") " pod="kube-system/kube-apiserver-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.945166    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: E0802 18:37:08.946166    3089 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.26:8443: connect: connection refused" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.247935    3089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-455569?timeout=10s\": dial tcp 192.168.39.26:8443: connect: connection refused" interval="800ms"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: I0802 18:37:09.350348    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.351793    3089 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.26:8443: connect: connection refused" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: W0802 18:37:09.511034    3089 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-455569&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.511099    3089 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-455569&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Aug 02 18:37:10 pause-455569 kubelet[3089]: I0802 18:37:10.153959    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.459855    3089 kubelet_node_status.go:112] "Node was previously registered" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.460611    3089 kubelet_node_status.go:76] "Successfully registered node" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.462341    3089 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.463814    3089 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.616407    3089 apiserver.go:52] "Watching apiserver"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.620230    3089 topology_manager.go:215] "Topology Admit Handler" podUID="201bc75b-6530-4c5b-8fc6-ae08db2bcf12" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5ffnn"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.620374    3089 topology_manager.go:215] "Topology Admit Handler" podUID="22b600e8-e5e0-4602-adf4-a37b0b8a6dbb" podNamespace="kube-system" podName="kube-proxy-b4mf7"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.645856    3089 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.688450    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b600e8-e5e0-4602-adf4-a37b0b8a6dbb-lib-modules\") pod \"kube-proxy-b4mf7\" (UID: \"22b600e8-e5e0-4602-adf4-a37b0b8a6dbb\") " pod="kube-system/kube-proxy-b4mf7"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.688497    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b600e8-e5e0-4602-adf4-a37b0b8a6dbb-xtables-lock\") pod \"kube-proxy-b4mf7\" (UID: \"22b600e8-e5e0-4602-adf4-a37b0b8a6dbb\") " pod="kube-system/kube-proxy-b4mf7"
	Aug 02 18:37:22 pause-455569 kubelet[3089]: I0802 18:37:22.005661    3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:37:30.474437   52213 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-455569 -n pause-455569
helpers_test.go:261: (dbg) Run:  kubectl --context pause-455569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-455569 -n pause-455569
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-455569 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-455569 logs -n 25: (1.389898249s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-872961         | offline-crio-872961       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:34 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:33 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-079131      | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:34 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-919916    | force-systemd-env-919916  | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC | 02 Aug 24 18:32 UTC |
	| start   | -p kubernetes-upgrade-132946   | kubernetes-upgrade-132946 | jenkins | v1.33.1 | 02 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:33 UTC | 02 Aug 24 18:34 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p offline-crio-872961         | offline-crio-872961       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:34 UTC |
	| start   | -p running-upgrade-079131      | running-upgrade-079131    | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-837935      | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:34 UTC |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:34 UTC | 02 Aug 24 18:35 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-891799 sudo    | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-079131      | running-upgrade-079131    | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p pause-455569 --memory=2048  | pause-455569              | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-837935 stop    | minikube                  | jenkins | v1.26.0 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:35 UTC |
	| start   | -p stopped-upgrade-837935      | stopped-upgrade-837935    | jenkins | v1.33.1 | 02 Aug 24 18:35 UTC | 02 Aug 24 18:36 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-891799 sudo    | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-891799         | NoKubernetes-891799       | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:36 UTC |
	| start   | -p cert-expiration-139745      | cert-expiration-139745    | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:37 UTC |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --cert-expiration=3m           |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p pause-455569                | pause-455569              | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:37 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-837935      | stopped-upgrade-837935    | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC | 02 Aug 24 18:36 UTC |
	| start   | -p force-systemd-flag-234725   | force-systemd-flag-234725 | jenkins | v1.33.1 | 02 Aug 24 18:36 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:36:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:36:44.828869   51814 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:36:44.829155   51814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:44.829167   51814 out.go:304] Setting ErrFile to fd 2...
	I0802 18:36:44.829173   51814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:36:44.829376   51814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:36:44.829977   51814 out.go:298] Setting JSON to false
	I0802 18:36:44.830962   51814 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4749,"bootTime":1722619056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:36:44.831025   51814 start.go:139] virtualization: kvm guest
	I0802 18:36:44.835178   51814 out.go:177] * [force-systemd-flag-234725] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:36:44.839135   51814 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:36:44.839192   51814 notify.go:220] Checking for updates...
	I0802 18:36:44.841777   51814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:36:44.843051   51814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:36:44.844422   51814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:44.845716   51814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:36:44.847223   51814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:36:44.848935   51814 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:44.849036   51814 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:36:44.849156   51814 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:44.849237   51814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:36:44.889361   51814 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:36:44.890612   51814 start.go:297] selected driver: kvm2
	I0802 18:36:44.890625   51814 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:36:44.890639   51814 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:36:44.891657   51814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:44.891738   51814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:36:44.907998   51814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:36:44.908058   51814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 18:36:44.908266   51814 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 18:36:44.908287   51814 cni.go:84] Creating CNI manager for ""
	I0802 18:36:44.908295   51814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:36:44.908302   51814 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 18:36:44.908363   51814 start.go:340] cluster config:
	{Name:force-systemd-flag-234725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-234725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:44.908466   51814 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:36:44.910792   51814 out.go:177] * Starting "force-systemd-flag-234725" primary control-plane node in "force-systemd-flag-234725" cluster
	I0802 18:36:42.216656   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.217131   51259 main.go:141] libmachine: (cert-expiration-139745) Found IP for machine: 192.168.61.201
	I0802 18:36:42.217156   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has current primary IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.217165   51259 main.go:141] libmachine: (cert-expiration-139745) Reserving static IP address...
	I0802 18:36:42.217599   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | unable to find host DHCP lease matching {name: "cert-expiration-139745", mac: "52:54:00:ee:71:54", ip: "192.168.61.201"} in network mk-cert-expiration-139745
	I0802 18:36:42.292607   51259 main.go:141] libmachine: (cert-expiration-139745) Reserved static IP address: 192.168.61.201
	I0802 18:36:42.292626   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Getting to WaitForSSH function...
	I0802 18:36:42.292634   51259 main.go:141] libmachine: (cert-expiration-139745) Waiting for SSH to be available...
	I0802 18:36:42.295684   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.296125   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.296155   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.296296   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using SSH client type: external
	I0802 18:36:42.296318   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa (-rw-------)
	I0802 18:36:42.296365   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:36:42.296373   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | About to run SSH command:
	I0802 18:36:42.296385   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | exit 0
	I0802 18:36:42.431559   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | SSH cmd err, output: <nil>: 
	I0802 18:36:42.431820   51259 main.go:141] libmachine: (cert-expiration-139745) KVM machine creation complete!
	I0802 18:36:42.432246   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetConfigRaw
	I0802 18:36:42.432807   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:42.433018   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:42.433212   51259 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 18:36:42.433223   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:36:42.434691   51259 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 18:36:42.434699   51259 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 18:36:42.434704   51259 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 18:36:42.434709   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.437316   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.437717   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.437739   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.437916   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.438087   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.438222   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.438319   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.438450   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.438651   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.438657   51259 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 18:36:42.546991   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:42.547009   51259 main.go:141] libmachine: Detecting the provisioner...
	I0802 18:36:42.547018   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.550037   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.550440   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.550464   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.550620   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.550788   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.550998   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.551120   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.551292   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.551459   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.551464   51259 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 18:36:42.667784   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 18:36:42.667857   51259 main.go:141] libmachine: found compatible host: buildroot
	I0802 18:36:42.667863   51259 main.go:141] libmachine: Provisioning with buildroot...
	I0802 18:36:42.667869   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.668134   51259 buildroot.go:166] provisioning hostname "cert-expiration-139745"
	I0802 18:36:42.668170   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.668411   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.671425   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.671931   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.671962   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.672062   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.672251   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.672440   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.672618   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.672816   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.673013   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.673024   51259 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-139745 && echo "cert-expiration-139745" | sudo tee /etc/hostname
	I0802 18:36:42.801879   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-139745
	
	I0802 18:36:42.801901   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.805018   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.805386   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.805405   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.805644   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:42.805850   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.806046   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:42.806181   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:42.806348   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:42.806516   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:42.806527   51259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-139745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-139745/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-139745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:36:42.930482   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:42.930496   51259 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:36:42.930543   51259 buildroot.go:174] setting up certificates
	I0802 18:36:42.930553   51259 provision.go:84] configureAuth start
	I0802 18:36:42.930562   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetMachineName
	I0802 18:36:42.930848   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:42.933608   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.934022   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.934050   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.934201   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:42.936523   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.936837   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:42.936852   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:42.936968   51259 provision.go:143] copyHostCerts
	I0802 18:36:42.937017   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:36:42.937023   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:36:42.937084   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:36:42.937180   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:36:42.937184   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:36:42.937204   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:36:42.937250   51259 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:36:42.937253   51259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:36:42.937269   51259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:36:42.937309   51259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-139745 san=[127.0.0.1 192.168.61.201 cert-expiration-139745 localhost minikube]
	I0802 18:36:43.082698   51259 provision.go:177] copyRemoteCerts
	I0802 18:36:43.082746   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:36:43.082768   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.085750   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.086185   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.086207   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.086440   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.086624   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.086773   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.086902   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.176478   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 18:36:43.201724   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:36:43.226106   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:36:43.250222   51259 provision.go:87] duration metric: took 319.658347ms to configureAuth
	I0802 18:36:43.250238   51259 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:36:43.250491   51259 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:43.250570   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.253147   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.253424   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.253447   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.253614   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.253803   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.253967   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.254085   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.254259   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.254468   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:43.254478   51259 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:36:43.530960   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:36:43.530977   51259 main.go:141] libmachine: Checking connection to Docker...
	I0802 18:36:43.530988   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetURL
	I0802 18:36:43.532582   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Using libvirt version 6000000
	I0802 18:36:43.535301   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.535678   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.535701   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.535906   51259 main.go:141] libmachine: Docker is up and running!
	I0802 18:36:43.535913   51259 main.go:141] libmachine: Reticulating splines...
	I0802 18:36:43.535917   51259 client.go:171] duration metric: took 23.811578526s to LocalClient.Create
	I0802 18:36:43.535938   51259 start.go:167] duration metric: took 23.811625469s to libmachine.API.Create "cert-expiration-139745"
	I0802 18:36:43.535946   51259 start.go:293] postStartSetup for "cert-expiration-139745" (driver="kvm2")
	I0802 18:36:43.535957   51259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:36:43.535984   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.536272   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:36:43.536293   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.538918   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.539361   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.539382   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.539556   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.539776   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.539965   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.540109   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.626284   51259 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:36:43.630319   51259 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:36:43.630333   51259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:36:43.630394   51259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:36:43.630487   51259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:36:43.630589   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:36:43.640203   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:43.666008   51259 start.go:296] duration metric: took 130.051153ms for postStartSetup
	I0802 18:36:43.666041   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetConfigRaw
	I0802 18:36:43.666703   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:43.669620   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.670038   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.670061   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.670282   51259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/config.json ...
	I0802 18:36:43.670455   51259 start.go:128] duration metric: took 23.966825616s to createHost
	I0802 18:36:43.670473   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.672773   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.673112   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.673131   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.673290   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.673469   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.673648   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.673796   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.674008   51259 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.674211   51259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0802 18:36:43.674223   51259 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:36:43.791731   51259 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623803.767450132
	
	I0802 18:36:43.791742   51259 fix.go:216] guest clock: 1722623803.767450132
	I0802 18:36:43.791762   51259 fix.go:229] Guest: 2024-08-02 18:36:43.767450132 +0000 UTC Remote: 2024-08-02 18:36:43.670461271 +0000 UTC m=+37.912934760 (delta=96.988861ms)
	I0802 18:36:43.791784   51259 fix.go:200] guest clock delta is within tolerance: 96.988861ms
	I0802 18:36:43.791789   51259 start.go:83] releasing machines lock for "cert-expiration-139745", held for 24.088280864s
	I0802 18:36:43.791813   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.792044   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:43.795278   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.795685   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.795703   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.795859   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796445   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796678   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:36:43.796783   51259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:36:43.796815   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.797159   51259 ssh_runner.go:195] Run: cat /version.json
	I0802 18:36:43.797176   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:36:43.800208   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.800827   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.800854   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.800890   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.801109   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.801255   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:43.801272   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:43.801313   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.801459   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.801544   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:36:43.801612   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.801686   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:36:43.801807   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:36:43.801925   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:36:43.914339   51259 ssh_runner.go:195] Run: systemctl --version
	I0802 18:36:43.920258   51259 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:36:44.078958   51259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:36:44.084947   51259 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:36:44.085004   51259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:36:44.101214   51259 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:36:44.101230   51259 start.go:495] detecting cgroup driver to use...
	I0802 18:36:44.101315   51259 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:36:44.117987   51259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:36:44.133020   51259 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:36:44.133055   51259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:36:44.146710   51259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:36:44.160424   51259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:36:44.283866   51259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:36:44.455896   51259 docker.go:233] disabling docker service ...
	I0802 18:36:44.455959   51259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:36:44.469269   51259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:36:44.481848   51259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:36:44.613758   51259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:36:44.732610   51259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:36:44.747693   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:36:44.770032   51259 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:36:44.770077   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.780949   51259 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:36:44.781021   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.794202   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.805412   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.816285   51259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:36:44.827637   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.838127   51259 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.860811   51259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:44.871959   51259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:36:44.883805   51259 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:36:44.883854   51259 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:36:44.897827   51259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:36:44.907023   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:45.030074   51259 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:36:45.173080   51259 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:36:45.173144   51259 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:36:45.177770   51259 start.go:563] Will wait 60s for crictl version
	I0802 18:36:45.177818   51259 ssh_runner.go:195] Run: which crictl
	I0802 18:36:45.181264   51259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:36:45.224053   51259 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:36:45.224125   51259 ssh_runner.go:195] Run: crio --version
	I0802 18:36:45.250127   51259 ssh_runner.go:195] Run: crio --version
	I0802 18:36:45.284964   51259 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:36:45.286218   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetIP
	I0802 18:36:45.289224   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:45.289765   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:36:45.289787   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:36:45.289967   51259 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0802 18:36:45.294143   51259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:36:45.306906   51259 kubeadm.go:883] updating cluster {Name:cert-expiration-139745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:36:45.307013   51259 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:36:45.307083   51259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:36:45.343398   51259 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 18:36:45.343466   51259 ssh_runner.go:195] Run: which lz4
	I0802 18:36:45.347646   51259 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 18:36:45.351710   51259 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:36:45.351733   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 18:36:43.817189   51349 machine.go:94] provisionDockerMachine start ...
	I0802 18:36:43.817207   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:43.817390   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.819966   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820376   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.820416   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.820548   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.820711   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820843   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.820985   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.821138   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.821320   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.821339   51349 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:36:43.939817   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:43.939849   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940132   51349 buildroot.go:166] provisioning hostname "pause-455569"
	I0802 18:36:43.940158   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:43.940385   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:43.943989   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944498   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:43.944536   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:43.944705   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:43.944923   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945109   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:43.945269   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:43.945424   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:43.945757   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:43.945790   51349 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-455569 && echo "pause-455569" | sudo tee /etc/hostname
	I0802 18:36:44.073861   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-455569
	
	I0802 18:36:44.073897   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.535132   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535596   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.535637   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.535814   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:44.536076   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536283   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:44.536432   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:44.536674   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:44.536910   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:44.536936   51349 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-455569' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-455569/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-455569' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:36:44.656659   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:36:44.656695   51349 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:36:44.656719   51349 buildroot.go:174] setting up certificates
	I0802 18:36:44.656735   51349 provision.go:84] configureAuth start
	I0802 18:36:44.656748   51349 main.go:141] libmachine: (pause-455569) Calling .GetMachineName
	I0802 18:36:44.657033   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:44.660513   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.660915   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.660942   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.661141   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:44.663667   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664015   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:44.664039   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:44.664219   51349 provision.go:143] copyHostCerts
	I0802 18:36:44.664288   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:36:44.664299   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:36:44.664354   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:36:44.664475   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:36:44.664488   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:36:44.664525   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:36:44.664610   51349 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:36:44.664621   51349 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:36:44.664685   51349 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:36:44.664757   51349 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.pause-455569 san=[127.0.0.1 192.168.39.26 localhost minikube pause-455569]
	I0802 18:36:45.112605   51349 provision.go:177] copyRemoteCerts
	I0802 18:36:45.112666   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:36:45.112688   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.115426   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.115785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.115899   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.116100   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.116263   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.116420   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:45.209422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:36:45.234782   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0802 18:36:45.258977   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:36:45.288678   51349 provision.go:87] duration metric: took 631.931145ms to configureAuth
	I0802 18:36:45.288704   51349 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:36:45.288886   51349 config.go:182] Loaded profile config "pause-455569": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:36:45.288961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:45.291523   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291819   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:45.291854   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:45.291998   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:45.292208   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292366   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:45.292492   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:45.292625   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:45.292804   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:45.292820   51349 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:36:44.912092   51814 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:36:44.912141   51814 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:36:44.912151   51814 cache.go:56] Caching tarball of preloaded images
	I0802 18:36:44.912267   51814 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:36:44.912280   51814 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:36:44.912381   51814 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/force-systemd-flag-234725/config.json ...
	I0802 18:36:44.912399   51814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/force-systemd-flag-234725/config.json: {Name:mk07b892edc5389323866eae005bc07a79c213b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:44.912557   51814 start.go:360] acquireMachinesLock for force-systemd-flag-234725: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:36:46.580125   51259 crio.go:462] duration metric: took 1.232510516s to copy over tarball
	I0802 18:36:46.580208   51259 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:36:48.723492   51259 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.143258379s)
	I0802 18:36:48.723511   51259 crio.go:469] duration metric: took 2.143370531s to extract the tarball
	I0802 18:36:48.723516   51259 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:36:48.760057   51259 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:36:48.802937   51259 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:36:48.802948   51259 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:36:48.802954   51259 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0802 18:36:48.803050   51259 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-139745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:36:48.803128   51259 ssh_runner.go:195] Run: crio config
	I0802 18:36:48.846580   51259 cni.go:84] Creating CNI manager for ""
	I0802 18:36:48.846590   51259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:36:48.846598   51259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:36:48.846618   51259 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-139745 NodeName:cert-expiration-139745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:36:48.846749   51259 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-139745"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:36:48.846803   51259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:36:48.856587   51259 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:36:48.856638   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:36:48.865982   51259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0802 18:36:48.882112   51259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:36:48.897398   51259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0802 18:36:48.912591   51259 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0802 18:36:48.916254   51259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:36:48.927417   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:49.046569   51259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:36:49.062430   51259 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745 for IP: 192.168.61.201
	I0802 18:36:49.062441   51259 certs.go:194] generating shared ca certs ...
	I0802 18:36:49.062455   51259 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.062609   51259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:36:49.062640   51259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:36:49.062645   51259 certs.go:256] generating profile certs ...
	I0802 18:36:49.062704   51259 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key
	I0802 18:36:49.062713   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt with IP's: []
	I0802 18:36:49.131035   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt ...
	I0802 18:36:49.131050   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt: {Name:mk5c59c893e49c375a6ab761487cc225357b6856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.131236   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key ...
	I0802 18:36:49.131248   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key: {Name:mk4bc96b4ef670bd7861b3301f8fe9239292008b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.131333   51259 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576
	I0802 18:36:49.131344   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.201]
	I0802 18:36:49.423749   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 ...
	I0802 18:36:49.423763   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576: {Name:mk104969b431e32fe293bdddd469a9c7320e89c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.423927   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576 ...
	I0802 18:36:49.423935   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576: {Name:mk5ec80321e7ca5d5852d32bd06da5aae4c6d9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.424008   51259 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt.c7569576 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt
	I0802 18:36:49.424091   51259 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key.c7569576 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key
	I0802 18:36:49.424143   51259 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key
	I0802 18:36:49.424153   51259 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt with IP's: []
	I0802 18:36:49.634369   51259 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt ...
	I0802 18:36:49.634383   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt: {Name:mkff097185d860903576931ebf8c3bf55f706f29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.634544   51259 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key ...
	I0802 18:36:49.634552   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key: {Name:mkf3da247d369553d8bcddd98b03fc90c30bbd03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:36:49.634717   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:36:49.634745   51259 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:36:49.634751   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:36:49.634773   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:36:49.634816   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:36:49.634841   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:36:49.634876   51259 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:49.635511   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:36:49.660029   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:36:49.683014   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:36:49.705765   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:36:49.728533   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 18:36:49.751687   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:36:49.773783   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:36:49.795517   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:36:49.817494   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:36:49.844351   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:36:49.869090   51259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:36:49.894507   51259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:36:49.911801   51259 ssh_runner.go:195] Run: openssl version
	I0802 18:36:49.917220   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:36:49.926933   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.931217   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.931264   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:36:49.936834   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:36:49.946769   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:36:49.957622   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.961885   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.961924   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:36:49.967384   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:36:49.978385   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:36:49.988541   51259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.992767   51259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.992809   51259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:36:49.998221   51259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:36:50.008243   51259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:36:50.011870   51259 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 18:36:50.011918   51259 kubeadm.go:392] StartCluster: {Name:cert-expiration-139745 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-expiration-139745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:36:50.012015   51259 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:36:50.012065   51259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:36:50.054052   51259 cri.go:89] found id: ""
	I0802 18:36:50.054106   51259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:36:50.063861   51259 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:36:50.072664   51259 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:36:50.081708   51259 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:36:50.081717   51259 kubeadm.go:157] found existing configuration files:
	
	I0802 18:36:50.081766   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:36:50.090912   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:36:50.090969   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:36:50.099947   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:36:50.108606   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:36:50.108656   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:36:50.117615   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:36:50.125856   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:36:50.125904   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:36:50.134422   51259 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:36:50.143468   51259 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:36:50.143535   51259 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:36:50.152556   51259 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:36:50.267472   51259 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 18:36:50.267584   51259 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:36:50.383085   51259 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:36:50.383222   51259 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:36:50.383365   51259 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:36:50.585989   51259 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:36:50.668116   51259 out.go:204]   - Generating certificates and keys ...
	I0802 18:36:50.668246   51259 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:36:50.668302   51259 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:36:50.762364   51259 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 18:36:50.848904   51259 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 18:36:51.034812   51259 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 18:36:51.140679   51259 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 18:36:51.194923   51259 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 18:36:51.195194   51259 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-139745 localhost] and IPs [192.168.61.201 127.0.0.1 ::1]
	I0802 18:36:51.303096   51259 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 18:36:51.303281   51259 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-139745 localhost] and IPs [192.168.61.201 127.0.0.1 ::1]
	I0802 18:36:51.624757   51259 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 18:36:52.056614   51259 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 18:36:52.224864   51259 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 18:36:52.225112   51259 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:36:52.289493   51259 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:36:52.514021   51259 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 18:36:52.635689   51259 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:36:53.026581   51259 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:36:53.084208   51259 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:36:53.084999   51259 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:36:53.090555   51259 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:36:53.367799   51814 start.go:364] duration metric: took 8.455208995s to acquireMachinesLock for "force-systemd-flag-234725"
	I0802 18:36:53.367901   51814 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-234725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-234725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:36:53.368033   51814 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 18:36:53.369983   51814 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0802 18:36:53.370201   51814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:36:53.370267   51814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:36:53.386836   51814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0802 18:36:53.387256   51814 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:36:53.387792   51814 main.go:141] libmachine: Using API Version  1
	I0802 18:36:53.387812   51814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:36:53.388190   51814 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:36:53.388389   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .GetMachineName
	I0802 18:36:53.388522   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .DriverName
	I0802 18:36:53.388655   51814 start.go:159] libmachine.API.Create for "force-systemd-flag-234725" (driver="kvm2")
	I0802 18:36:53.388685   51814 client.go:168] LocalClient.Create starting
	I0802 18:36:53.388715   51814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 18:36:53.388751   51814 main.go:141] libmachine: Decoding PEM data...
	I0802 18:36:53.388770   51814 main.go:141] libmachine: Parsing certificate...
	I0802 18:36:53.388848   51814 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 18:36:53.388873   51814 main.go:141] libmachine: Decoding PEM data...
	I0802 18:36:53.388890   51814 main.go:141] libmachine: Parsing certificate...
	I0802 18:36:53.388912   51814 main.go:141] libmachine: Running pre-create checks...
	I0802 18:36:53.388935   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .PreCreateCheck
	I0802 18:36:53.389302   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .GetConfigRaw
	I0802 18:36:53.389750   51814 main.go:141] libmachine: Creating machine...
	I0802 18:36:53.389766   51814 main.go:141] libmachine: (force-systemd-flag-234725) Calling .Create
	I0802 18:36:53.389874   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating KVM machine...
	I0802 18:36:53.391174   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | found existing default KVM network
	I0802 18:36:53.392551   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.392409   51870 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:35:15} reservation:<nil>}
	I0802 18:36:53.393933   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.393857   51870 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4850}
	I0802 18:36:53.393963   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | created network xml: 
	I0802 18:36:53.393979   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | <network>
	I0802 18:36:53.393988   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <name>mk-force-systemd-flag-234725</name>
	I0802 18:36:53.394015   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <dns enable='no'/>
	I0802 18:36:53.394039   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   
	I0802 18:36:53.394053   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0802 18:36:53.394068   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |     <dhcp>
	I0802 18:36:53.394094   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0802 18:36:53.394132   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |     </dhcp>
	I0802 18:36:53.394146   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   </ip>
	I0802 18:36:53.394156   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG |   
	I0802 18:36:53.394165   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | </network>
	I0802 18:36:53.394181   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | 
	I0802 18:36:53.399578   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | trying to create private KVM network mk-force-systemd-flag-234725 192.168.50.0/24...
	I0802 18:36:53.470300   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | private KVM network mk-force-systemd-flag-234725 192.168.50.0/24 created
	I0802 18:36:53.470333   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 ...
	I0802 18:36:53.470359   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.470268   51870 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:53.470377   51814 main.go:141] libmachine: (force-systemd-flag-234725) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 18:36:53.470469   51814 main.go:141] libmachine: (force-systemd-flag-234725) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 18:36:53.712114   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:53.711996   51870 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/id_rsa...
	I0802 18:36:54.126457   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:54.126305   51870 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/force-systemd-flag-234725.rawdisk...
	I0802 18:36:54.126515   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Writing magic tar header
	I0802 18:36:54.126535   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Writing SSH key tar header
	I0802 18:36:54.126549   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:54.126451   51870 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 ...
	I0802 18:36:54.126566   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725
	I0802 18:36:54.126630   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725 (perms=drwx------)
	I0802 18:36:54.126715   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 18:36:54.126739   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 18:36:54.126749   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 18:36:54.126760   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:36:54.126777   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 18:36:54.126786   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 18:36:54.126801   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home/jenkins
	I0802 18:36:54.126817   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Checking permissions on dir: /home
	I0802 18:36:54.126830   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | Skipping /home - not owner
	I0802 18:36:54.126841   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 18:36:54.126854   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 18:36:54.126863   51814 main.go:141] libmachine: (force-systemd-flag-234725) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 18:36:54.126873   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating domain...
	I0802 18:36:54.128120   51814 main.go:141] libmachine: (force-systemd-flag-234725) define libvirt domain using xml: 
	I0802 18:36:54.128146   51814 main.go:141] libmachine: (force-systemd-flag-234725) <domain type='kvm'>
	I0802 18:36:54.128159   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <name>force-systemd-flag-234725</name>
	I0802 18:36:54.128169   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <memory unit='MiB'>2048</memory>
	I0802 18:36:54.128182   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <vcpu>2</vcpu>
	I0802 18:36:54.128189   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <features>
	I0802 18:36:54.128197   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <acpi/>
	I0802 18:36:54.128205   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <apic/>
	I0802 18:36:54.128221   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <pae/>
	I0802 18:36:54.128238   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128252   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </features>
	I0802 18:36:54.128263   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <cpu mode='host-passthrough'>
	I0802 18:36:54.128273   51814 main.go:141] libmachine: (force-systemd-flag-234725)   
	I0802 18:36:54.128283   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </cpu>
	I0802 18:36:54.128294   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <os>
	I0802 18:36:54.128305   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <type>hvm</type>
	I0802 18:36:54.128320   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <boot dev='cdrom'/>
	I0802 18:36:54.128350   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <boot dev='hd'/>
	I0802 18:36:54.128361   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <bootmenu enable='no'/>
	I0802 18:36:54.128368   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </os>
	I0802 18:36:54.128376   51814 main.go:141] libmachine: (force-systemd-flag-234725)   <devices>
	I0802 18:36:54.128383   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <disk type='file' device='cdrom'>
	I0802 18:36:54.128401   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/boot2docker.iso'/>
	I0802 18:36:54.128409   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target dev='hdc' bus='scsi'/>
	I0802 18:36:54.128415   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <readonly/>
	I0802 18:36:54.128423   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </disk>
	I0802 18:36:54.128429   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <disk type='file' device='disk'>
	I0802 18:36:54.128435   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 18:36:54.128446   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/force-systemd-flag-234725/force-systemd-flag-234725.rawdisk'/>
	I0802 18:36:54.128451   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target dev='hda' bus='virtio'/>
	I0802 18:36:54.128456   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </disk>
	I0802 18:36:54.128461   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <interface type='network'>
	I0802 18:36:54.128467   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source network='mk-force-systemd-flag-234725'/>
	I0802 18:36:54.128477   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <model type='virtio'/>
	I0802 18:36:54.128482   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </interface>
	I0802 18:36:54.128487   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <interface type='network'>
	I0802 18:36:54.128519   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <source network='default'/>
	I0802 18:36:54.128547   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <model type='virtio'/>
	I0802 18:36:54.128565   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </interface>
	I0802 18:36:54.128586   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <serial type='pty'>
	I0802 18:36:54.128596   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target port='0'/>
	I0802 18:36:54.128607   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </serial>
	I0802 18:36:54.128616   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <console type='pty'>
	I0802 18:36:54.128627   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <target type='serial' port='0'/>
	I0802 18:36:54.128636   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </console>
	I0802 18:36:54.128646   51814 main.go:141] libmachine: (force-systemd-flag-234725)     <rng model='virtio'>
	I0802 18:36:54.128655   51814 main.go:141] libmachine: (force-systemd-flag-234725)       <backend model='random'>/dev/random</backend>
	I0802 18:36:54.128665   51814 main.go:141] libmachine: (force-systemd-flag-234725)     </rng>
	I0802 18:36:54.128674   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128686   51814 main.go:141] libmachine: (force-systemd-flag-234725)     
	I0802 18:36:54.128695   51814 main.go:141] libmachine: (force-systemd-flag-234725)   </devices>
	I0802 18:36:54.128707   51814 main.go:141] libmachine: (force-systemd-flag-234725) </domain>
	I0802 18:36:54.128718   51814 main.go:141] libmachine: (force-systemd-flag-234725) 
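	(Editor's note) The log lines above contain the complete libvirt domain XML that the kvm2 driver defines for force-systemd-flag-234725. As a point of reference only, the define-and-start step can be reproduced outside minikube with the libvirt Go bindings; this is a minimal sketch under assumptions (the libvirt.org/go/libvirt module path and the XML file name are illustrative and not taken from the log):

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt" // assumed module path for the libvirt Go bindings
	)

	func main() {
		// Read the domain XML that was logged above (saved to a file for this sketch).
		xml, err := os.ReadFile("force-systemd-flag-234725.xml")
		if err != nil {
			log.Fatal(err)
		}

		// Connect to the same hypervisor URI the kvm2 driver uses.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent domain from XML, then start it ("Creating domain...").
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain defined and started")
	}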
	I0802 18:36:54.134006   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:14:d3:d3 in network default
	I0802 18:36:54.134707   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring networks are active...
	I0802 18:36:54.134763   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:54.135665   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring network default is active
	I0802 18:36:54.136083   51814 main.go:141] libmachine: (force-systemd-flag-234725) Ensuring network mk-force-systemd-flag-234725 is active
	I0802 18:36:54.136786   51814 main.go:141] libmachine: (force-systemd-flag-234725) Getting domain xml...
	I0802 18:36:54.137681   51814 main.go:141] libmachine: (force-systemd-flag-234725) Creating domain...
	I0802 18:36:53.092163   51259 out.go:204]   - Booting up control plane ...
	I0802 18:36:53.092286   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:36:53.092375   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:36:53.093044   51259 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:36:53.113780   51259 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:36:53.114472   51259 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:36:53.114521   51259 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:36:53.263848   51259 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 18:36:53.263937   51259 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 18:36:53.765360   51259 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.772894ms
	I0802 18:36:53.765493   51259 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 18:36:53.116134   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:36:53.116161   51349 machine.go:97] duration metric: took 9.298960301s to provisionDockerMachine
	I0802 18:36:53.116175   51349 start.go:293] postStartSetup for "pause-455569" (driver="kvm2")
	I0802 18:36:53.116189   51349 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:36:53.116209   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.116697   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:36:53.116735   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.120256   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120750   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.120785   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.120988   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.121169   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.121333   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.121531   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.213159   51349 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:36:53.217372   51349 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:36:53.217398   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:36:53.217466   51349 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:36:53.217586   51349 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:36:53.217733   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:36:53.226713   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:36:53.253754   51349 start.go:296] duration metric: took 137.564126ms for postStartSetup
	I0802 18:36:53.253799   51349 fix.go:56] duration metric: took 9.461883705s for fixHost
	I0802 18:36:53.253823   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.256858   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257245   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.257275   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.257499   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.257745   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.257961   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.258127   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.258342   51349 main.go:141] libmachine: Using SSH client type: native
	I0802 18:36:53.258577   51349 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.26 22 <nil> <nil>}
	I0802 18:36:53.258593   51349 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:36:53.367640   51349 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623813.363097160
	
	I0802 18:36:53.367663   51349 fix.go:216] guest clock: 1722623813.363097160
	I0802 18:36:53.367670   51349 fix.go:229] Guest: 2024-08-02 18:36:53.36309716 +0000 UTC Remote: 2024-08-02 18:36:53.253804237 +0000 UTC m=+36.822748293 (delta=109.292923ms)
	I0802 18:36:53.367690   51349 fix.go:200] guest clock delta is within tolerance: 109.292923ms
	I0802 18:36:53.367695   51349 start.go:83] releasing machines lock for "pause-455569", held for 9.575807071s
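	(Editor's note) The clock check above reads the guest clock with `date +%s.%N` (logged with mangled format verbs), compares it against the host-side timestamp, and accepts the ~109ms delta as within tolerance. Below is a minimal Go sketch of the same comparison; the 2s tolerance is an assumption for illustration, since the log only states that the delta was "within tolerance":

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N` (e.g. "1722623813.363097160")
	// into a time.Time, mirroring what the log reports as the "guest clock".
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722623813.363097160") // value taken from the log above
		if err != nil {
			panic(err)
		}
		remote := time.Now() // in the log this is the host-side reference timestamp
		delta := guest.Sub(remote)

		const tolerance = 2 * time.Second // assumed threshold, not stated in the log
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, clock sync needed\n", delta)
		}
	}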
	I0802 18:36:53.367715   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.367973   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:36:53.371290   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371672   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.371701   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.371823   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372414   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372642   51349 main.go:141] libmachine: (pause-455569) Calling .DriverName
	I0802 18:36:53.372726   51349 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:36:53.372772   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.372847   51349 ssh_runner.go:195] Run: cat /version.json
	I0802 18:36:53.372869   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHHostname
	I0802 18:36:53.375636   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.375853   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376027   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376055   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376189   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376279   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:36:53.376308   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:36:53.376345   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376486   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHPort
	I0802 18:36:53.376531   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376619   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHKeyPath
	I0802 18:36:53.376721   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.376782   51349 main.go:141] libmachine: (pause-455569) Calling .GetSSHUsername
	I0802 18:36:53.376901   51349 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/pause-455569/id_rsa Username:docker}
	I0802 18:36:53.468342   51349 ssh_runner.go:195] Run: systemctl --version
	I0802 18:36:53.496584   51349 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:36:53.679352   51349 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:36:53.689060   51349 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:36:53.689143   51349 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:36:53.711143   51349 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0802 18:36:53.711175   51349 start.go:495] detecting cgroup driver to use...
	I0802 18:36:53.711255   51349 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:36:53.744892   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:36:53.762781   51349 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:36:53.762845   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:36:53.789046   51349 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:36:53.916327   51349 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:36:54.170102   51349 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:36:54.365200   51349 docker.go:233] disabling docker service ...
	I0802 18:36:54.365285   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:36:54.422189   51349 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:36:54.457595   51349 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:36:54.713424   51349 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:36:55.135741   51349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:36:55.167916   51349 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:36:55.233366   51349 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:36:55.233436   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.260193   51349 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:36:55.260274   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.275616   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.298506   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.314564   51349 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:36:55.335754   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.358202   51349 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.375757   51349 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:36:55.391753   51349 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:36:55.404660   51349 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:36:55.416981   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:36:55.675813   51349 ssh_runner.go:195] Run: sudo systemctl restart crio
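	(Editor's note) The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Pieced together from those commands, the key lines of the drop-in afterwards should read roughly as follows; this is only an excerpt, and the surrounding TOML sections of the real file are not shown in the log:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]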
	I0802 18:36:59.266390   51259 kubeadm.go:310] [api-check] The API server is healthy after 5.502329671s
	I0802 18:36:59.280158   51259 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 18:36:59.298888   51259 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 18:36:59.336624   51259 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 18:36:59.336882   51259 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-139745 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 18:36:59.351771   51259 kubeadm.go:310] [bootstrap-token] Using token: e2ysyn.f25ty5cly7qgtp0x
	I0802 18:36:55.447526   51814 main.go:141] libmachine: (force-systemd-flag-234725) Waiting to get IP...
	I0802 18:36:55.448438   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.448884   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.448924   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.448867   51870 retry.go:31] will retry after 256.074668ms: waiting for machine to come up
	I0802 18:36:55.706466   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.707142   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.707171   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.707051   51870 retry.go:31] will retry after 249.772964ms: waiting for machine to come up
	I0802 18:36:55.958640   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:55.959129   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:55.959162   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:55.959057   51870 retry.go:31] will retry after 397.047934ms: waiting for machine to come up
	I0802 18:36:56.357642   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:56.358143   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:56.358176   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:56.358080   51870 retry.go:31] will retry after 527.244851ms: waiting for machine to come up
	I0802 18:36:56.886666   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:56.887129   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:56.887158   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:56.887080   51870 retry.go:31] will retry after 681.858186ms: waiting for machine to come up
	I0802 18:36:57.570375   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:57.570911   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:57.570940   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:57.570862   51870 retry.go:31] will retry after 701.988959ms: waiting for machine to come up
	I0802 18:36:58.274839   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:58.275360   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:58.275391   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:58.275266   51870 retry.go:31] will retry after 1.087546581s: waiting for machine to come up
	I0802 18:36:59.363944   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:36:59.364433   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:36:59.364463   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:36:59.364390   51870 retry.go:31] will retry after 907.645437ms: waiting for machine to come up
	I0802 18:36:59.354099   51259 out.go:204]   - Configuring RBAC rules ...
	I0802 18:36:59.354249   51259 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 18:36:59.367765   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 18:36:59.376862   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 18:36:59.383282   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 18:36:59.390269   51259 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 18:36:59.395248   51259 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 18:36:59.671829   51259 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 18:37:00.116739   51259 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 18:37:00.674056   51259 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 18:37:00.675555   51259 kubeadm.go:310] 
	I0802 18:37:00.675641   51259 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 18:37:00.675645   51259 kubeadm.go:310] 
	I0802 18:37:00.675726   51259 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 18:37:00.675730   51259 kubeadm.go:310] 
	I0802 18:37:00.675749   51259 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 18:37:00.675800   51259 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 18:37:00.675840   51259 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 18:37:00.675843   51259 kubeadm.go:310] 
	I0802 18:37:00.675884   51259 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 18:37:00.675887   51259 kubeadm.go:310] 
	I0802 18:37:00.675958   51259 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 18:37:00.675965   51259 kubeadm.go:310] 
	I0802 18:37:00.676017   51259 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 18:37:00.676125   51259 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 18:37:00.676222   51259 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 18:37:00.676230   51259 kubeadm.go:310] 
	I0802 18:37:00.676312   51259 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 18:37:00.676393   51259 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 18:37:00.676398   51259 kubeadm.go:310] 
	I0802 18:37:00.676504   51259 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e2ysyn.f25ty5cly7qgtp0x \
	I0802 18:37:00.676631   51259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 18:37:00.676660   51259 kubeadm.go:310] 	--control-plane 
	I0802 18:37:00.676665   51259 kubeadm.go:310] 
	I0802 18:37:00.676782   51259 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 18:37:00.676804   51259 kubeadm.go:310] 
	I0802 18:37:00.676893   51259 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e2ysyn.f25ty5cly7qgtp0x \
	I0802 18:37:00.677011   51259 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 18:37:00.677342   51259 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:37:00.677360   51259 cni.go:84] Creating CNI manager for ""
	I0802 18:37:00.677368   51259 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:00.679214   51259 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:37:00.680698   51259 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:37:00.691896   51259 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
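	(Editor's note) The 496-byte conflist that minikube writes to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log, so the snippet below is only a representative bridge CNI configuration under assumed defaults; the plugin option set and the 10.244.0.0/16 pod CIDR are assumptions based on minikube's usual defaults, not the captured file:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}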
	I0802 18:37:00.714792   51259 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:37:00.714854   51259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:37:00.714908   51259 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-139745 minikube.k8s.io/updated_at=2024_08_02T18_37_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=cert-expiration-139745 minikube.k8s.io/primary=true
	I0802 18:37:00.728735   51259 ops.go:34] apiserver oom_adj: -16
	I0802 18:37:00.889036   51259 kubeadm.go:1113] duration metric: took 174.252113ms to wait for elevateKubeSystemPrivileges
	I0802 18:37:00.910885   51259 kubeadm.go:394] duration metric: took 10.898962331s to StartCluster
	I0802 18:37:00.910919   51259 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:00.911007   51259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:37:00.912552   51259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:00.912806   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 18:37:00.912832   51259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:37:00.912897   51259 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 18:37:00.912948   51259 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-139745"
	I0802 18:37:00.912965   51259 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-139745"
	I0802 18:37:00.912978   51259 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-139745"
	I0802 18:37:00.913011   51259 host.go:66] Checking if "cert-expiration-139745" exists ...
	I0802 18:37:00.913021   51259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-139745"
	I0802 18:37:00.913034   51259 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:00.913493   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.913509   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.913532   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.913612   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.914344   51259 out.go:177] * Verifying Kubernetes components...
	I0802 18:37:00.915787   51259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:37:00.929261   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35497
	I0802 18:37:00.929685   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.930225   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.930244   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.930584   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.930827   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0802 18:37:00.931205   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.931200   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.931236   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.931669   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.931686   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.932012   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.932245   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.935351   51259 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-139745"
	I0802 18:37:00.935374   51259 host.go:66] Checking if "cert-expiration-139745" exists ...
	I0802 18:37:00.935630   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.935655   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.946174   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0802 18:37:00.946670   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.947096   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.947130   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.947663   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.947853   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.949599   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:37:00.951263   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0802 18:37:00.951344   51259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:36:57.976373   48425 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:36:57.976489   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:36:57.976733   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:00.951693   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.952211   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.952228   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.952542   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.952679   51259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:37:00.952688   51259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 18:37:00.952704   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:37:00.953122   51259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:37:00.953152   51259 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:37:00.955744   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.956138   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:37:00.956150   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.956305   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:37:00.956476   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:37:00.956626   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:37:00.956767   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:37:00.969135   51259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
	I0802 18:37:00.969469   51259 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:37:00.969927   51259 main.go:141] libmachine: Using API Version  1
	I0802 18:37:00.969937   51259 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:37:00.970233   51259 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:37:00.970393   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetState
	I0802 18:37:00.972266   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .DriverName
	I0802 18:37:00.972464   51259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 18:37:00.972471   51259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 18:37:00.972483   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHHostname
	I0802 18:37:00.975236   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.975637   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:71:54", ip: ""} in network mk-cert-expiration-139745: {Iface:virbr1 ExpiryTime:2024-08-02 19:36:34 +0000 UTC Type:0 Mac:52:54:00:ee:71:54 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:cert-expiration-139745 Clientid:01:52:54:00:ee:71:54}
	I0802 18:37:00.975667   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | domain cert-expiration-139745 has defined IP address 192.168.61.201 and MAC address 52:54:00:ee:71:54 in network mk-cert-expiration-139745
	I0802 18:37:00.975856   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHPort
	I0802 18:37:00.976056   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHKeyPath
	I0802 18:37:00.976198   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .GetSSHUsername
	I0802 18:37:00.976321   51259 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/cert-expiration-139745/id_rsa Username:docker}
	I0802 18:37:01.170401   51259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:37:01.170443   51259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 18:37:01.218246   51259 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:37:01.218292   51259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:37:01.262617   51259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 18:37:01.291830   51259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:37:01.568357   51259 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0802 18:37:01.568404   51259 api_server.go:72] duration metric: took 655.54224ms to wait for apiserver process to appear ...
	I0802 18:37:01.568419   51259 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:37:01.568438   51259 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0802 18:37:01.568467   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.568479   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.568794   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.568816   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.568818   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.568823   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.568831   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.569067   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.569075   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.569084   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.578878   51259 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0802 18:37:01.584458   51259 api_server.go:141] control plane version: v1.30.3
	I0802 18:37:01.584473   51259 api_server.go:131] duration metric: took 16.048964ms to wait for apiserver health ...
	I0802 18:37:01.584480   51259 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:37:01.592685   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.592699   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.593089   51259 main.go:141] libmachine: (cert-expiration-139745) DBG | Closing plugin on server side
	I0802 18:37:01.593107   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.593116   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.593705   51259 system_pods.go:59] 4 kube-system pods found
	I0802 18:37:01.593721   51259 system_pods.go:61] "etcd-cert-expiration-139745" [dde8f282-e341-48cb-9897-069e1c320ecb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 18:37:01.593728   51259 system_pods.go:61] "kube-apiserver-cert-expiration-139745" [744503b5-e26d-4bda-9636-dfdedbc526b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 18:37:01.593733   51259 system_pods.go:61] "kube-controller-manager-cert-expiration-139745" [639bab33-005b-4205-a54a-7a4e0ff3f1c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 18:37:01.593739   51259 system_pods.go:61] "kube-scheduler-cert-expiration-139745" [80f8cbdd-b8b3-4fb6-b8f7-8543165e7fd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 18:37:01.593744   51259 system_pods.go:74] duration metric: took 9.259244ms to wait for pod list to return data ...
	I0802 18:37:01.593751   51259 kubeadm.go:582] duration metric: took 680.897267ms to wait for: map[apiserver:true system_pods:true]
	I0802 18:37:01.593761   51259 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:37:01.598810   51259 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:37:01.598827   51259 node_conditions.go:123] node cpu capacity is 2
	I0802 18:37:01.598837   51259 node_conditions.go:105] duration metric: took 5.071367ms to run NodePressure ...
	I0802 18:37:01.598850   51259 start.go:241] waiting for startup goroutines ...
	I0802 18:37:01.771846   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.771862   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.772136   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.772148   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.772158   51259 main.go:141] libmachine: Making call to close driver server
	I0802 18:37:01.772166   51259 main.go:141] libmachine: (cert-expiration-139745) Calling .Close
	I0802 18:37:01.772389   51259 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:37:01.772400   51259 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:37:01.773978   51259 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0802 18:37:01.775067   51259 addons.go:510] duration metric: took 862.167199ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0802 18:37:02.072855   51259 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-139745" context rescaled to 1 replicas
	I0802 18:37:02.072881   51259 start.go:246] waiting for cluster config update ...
	I0802 18:37:02.072890   51259 start.go:255] writing updated cluster config ...
	I0802 18:37:02.073136   51259 ssh_runner.go:195] Run: rm -f paused
	I0802 18:37:02.118824   51259 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 18:37:02.120763   51259 out.go:177] * Done! kubectl is now configured to use "cert-expiration-139745" cluster and "default" namespace by default
	I0802 18:37:00.273413   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:00.273843   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:00.273866   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:00.273797   51870 retry.go:31] will retry after 1.200432562s: waiting for machine to come up
	I0802 18:37:01.476140   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:01.476617   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:01.476646   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:01.476562   51870 retry.go:31] will retry after 2.291414721s: waiting for machine to come up
	I0802 18:37:03.769330   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:03.769860   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:03.769888   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:03.769797   51870 retry.go:31] will retry after 2.203601404s: waiting for machine to come up
	I0802 18:37:05.974875   51349 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.29902423s)
	I0802 18:37:05.974915   51349 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:37:05.974973   51349 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:37:05.979907   51349 start.go:563] Will wait 60s for crictl version
	I0802 18:37:05.979952   51349 ssh_runner.go:195] Run: which crictl
	I0802 18:37:05.983635   51349 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:37:06.018370   51349 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:37:06.018446   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.045659   51349 ssh_runner.go:195] Run: crio --version
	I0802 18:37:06.076497   51349 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:37:06.077558   51349 main.go:141] libmachine: (pause-455569) Calling .GetIP
	I0802 18:37:06.080529   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.080886   51349 main.go:141] libmachine: (pause-455569) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:2b:54", ip: ""} in network mk-pause-455569: {Iface:virbr3 ExpiryTime:2024-08-02 19:35:33 +0000 UTC Type:0 Mac:52:54:00:9d:2b:54 Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:pause-455569 Clientid:01:52:54:00:9d:2b:54}
	I0802 18:37:06.080908   51349 main.go:141] libmachine: (pause-455569) DBG | domain pause-455569 has defined IP address 192.168.39.26 and MAC address 52:54:00:9d:2b:54 in network mk-pause-455569
	I0802 18:37:06.081163   51349 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 18:37:06.085397   51349 kubeadm.go:883] updating cluster {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:37:06.085545   51349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:37:06.085616   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.126311   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.126333   51349 crio.go:433] Images already preloaded, skipping extraction
	I0802 18:37:06.126380   51349 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:37:06.163548   51349 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:37:06.163583   51349 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:37:06.163593   51349 kubeadm.go:934] updating node { 192.168.39.26 8443 v1.30.3 crio true true} ...
	I0802 18:37:06.163744   51349 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455569 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:37:06.163835   51349 ssh_runner.go:195] Run: crio config
	I0802 18:37:06.216364   51349 cni.go:84] Creating CNI manager for ""
	I0802 18:37:06.216384   51349 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:06.216394   51349 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:37:06.216413   51349 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.26 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455569 NodeName:pause-455569 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:37:06.216531   51349 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455569"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:37:06.216591   51349 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:37:06.226521   51349 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:37:06.226590   51349 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:37:06.235989   51349 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0802 18:37:06.252699   51349 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:37:06.268779   51349 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0802 18:37:06.285022   51349 ssh_runner.go:195] Run: grep 192.168.39.26	control-plane.minikube.internal$ /etc/hosts
	I0802 18:37:06.288803   51349 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:37:06.424154   51349 ssh_runner.go:195] Run: sudo systemctl start kubelet
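A minimal sketch for reading back the files written in the steps above (the paths come from the scp lines; assuming the pause-455569 profile is still running and that `minikube -p <profile> ssh -- <cmd>` is available on the host — not part of this test run):

    # sketch only: inspect the generated kubeadm config and kubelet drop-in on the node
    minikube -p pause-455569 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
    minikube -p pause-455569 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube -p pause-455569 ssh -- sudo systemctl status kubelet --no-pager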
	I0802 18:37:06.439377   51349 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569 for IP: 192.168.39.26
	I0802 18:37:06.439402   51349 certs.go:194] generating shared ca certs ...
	I0802 18:37:06.439421   51349 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:06.439597   51349 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:37:06.439652   51349 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:37:06.439661   51349 certs.go:256] generating profile certs ...
	I0802 18:37:06.439745   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/client.key
	I0802 18:37:06.439838   51349 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key.baed76b2
	I0802 18:37:06.439873   51349 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key
	I0802 18:37:06.440019   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:37:06.440054   51349 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:37:06.440064   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:37:06.440087   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:37:06.440113   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:37:06.440130   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:37:06.440164   51349 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:37:06.440694   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:37:06.465958   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:37:02.977213   48425 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:37:02.977527   48425 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:37:05.974988   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:05.975554   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:05.975574   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:05.975515   51870 retry.go:31] will retry after 2.769051441s: waiting for machine to come up
	I0802 18:37:08.745890   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | domain force-systemd-flag-234725 has defined MAC address 52:54:00:57:0d:31 in network mk-force-systemd-flag-234725
	I0802 18:37:08.746372   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | unable to find current IP address of domain force-systemd-flag-234725 in network mk-force-systemd-flag-234725
	I0802 18:37:08.746405   51814 main.go:141] libmachine: (force-systemd-flag-234725) DBG | I0802 18:37:08.746341   51870 retry.go:31] will retry after 2.778647468s: waiting for machine to come up
	I0802 18:37:06.490272   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:37:06.512930   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:37:06.534670   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 18:37:06.557698   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 18:37:06.579821   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:37:06.601422   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/pause-455569/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:37:06.624191   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:37:06.646603   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:37:06.672574   51349 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:37:06.695750   51349 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:37:06.711231   51349 ssh_runner.go:195] Run: openssl version
	I0802 18:37:06.716980   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:37:06.727800   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732170   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.732226   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:37:06.738156   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:37:06.747617   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:37:06.757803   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761886   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.761937   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:37:06.767668   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:37:06.776348   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:37:06.786787   51349 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790824   51349 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.790873   51349 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:37:06.796473   51349 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
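The ln/openssl sequence above follows the OpenSSL hashed-name convention for /etc/ssl/certs: each certificate is symlinked under its subject hash plus a ".0" suffix so OpenSSL can find it by hash. A minimal sketch of the same idea (the hash values b5213941, 51391683 and 3ec20f2e come from the commands above; the variable name is illustrative):

    # sketch of the hashed-symlink convention used above
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"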
	I0802 18:37:06.806038   51349 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:37:06.810368   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:37:06.815765   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:37:06.821175   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:37:06.826527   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:37:06.831536   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:37:06.836568   51349 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
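The -checkend 86400 calls above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the command exits non-zero if the certificate expires within that window. A standalone form of the same check (the certificate path is illustrative):

    # exits 0 if the certificate is still valid in 24h, non-zero otherwise
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"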
	I0802 18:37:06.841945   51349 kubeadm.go:392] StartCluster: {Name:pause-455569 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-455569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.26 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:37:06.842088   51349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:37:06.842131   51349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:37:06.876875   51349 cri.go:89] found id: "bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d"
	I0802 18:37:06.876910   51349 cri.go:89] found id: "e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544"
	I0802 18:37:06.876918   51349 cri.go:89] found id: "3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8"
	I0802 18:37:06.876924   51349 cri.go:89] found id: "d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe"
	I0802 18:37:06.876929   51349 cri.go:89] found id: "c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7"
	I0802 18:37:06.876935   51349 cri.go:89] found id: "64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6"
	I0802 18:37:06.876939   51349 cri.go:89] found id: "cd4c6565542c91adb90cecb787b79f87939fdb0e03a0aa9dad1a1f778becdbc4"
	I0802 18:37:06.876944   51349 cri.go:89] found id: "51defafa540f57928366e7d3101908daa839051eb51c6250f5aefe9a4af1e3ee"
	I0802 18:37:06.876949   51349 cri.go:89] found id: "1457c2f2941eafeeaa86f8cf787a8da01a73f949da71a1a6ef8af37ac63ffd85"
	I0802 18:37:06.876958   51349 cri.go:89] found id: "b83d690b8c4f1408d97e336b93e91b91bf371aefc601b1793a7485e785665d18"
	I0802 18:37:06.876963   51349 cri.go:89] found id: "e5647b8714ff3460a485e6cdd00b03f7d8ff47b859819cb0aa43fca94682d24e"
	I0802 18:37:06.876967   51349 cri.go:89] found id: "56f59a67c271d9a0dc015537492509698838cb31b03a4e2b6de0c56b92bab8b2"
	I0802 18:37:06.876972   51349 cri.go:89] found id: ""
	I0802 18:37:06.877032   51349 ssh_runner.go:195] Run: sudo runc list -f json
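The container IDs listed by cri.go above can be resolved back to their pod and container names on the node with crictl; a minimal sketch (the ID is copied from one of the "found id" lines above, and the label filter matches the command in the log):

    # sketch only: resolve one of the IDs found above and re-list kube-system containers
    sudo crictl inspect bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system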
	
	
	==> CRI-O <==
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.864236782Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ffnn,Uid:201bc75b-6530-4c5b-8fc6-ae08db2bcf12,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623833024132456,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:37:12.619976819Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&PodSandboxMetadata{Name:kube-proxy-b4mf7,Uid:22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1722623832951342040,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:37:12.619987240Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&PodSandboxMetadata{Name:etcd-pause-455569,Uid:dd70b4af1f21d296a10445f25a0431af,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829116092141,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.39.26:2379,kubernetes.io/config.hash: dd70b4af1f21d296a10445f25a0431af,kubernetes.io/config.seen: 2024-08-02T18:37:08.620799181Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-455569,Uid:59d36393c6d3cc00baaad9eefe8d2b47,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829104599672,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59d36393c6d3cc00baaad9eefe8d2b47,kubernetes.io/config.seen: 2024-08-02T18:37:08.620801288Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba9db4d615d05ece5860fe67f7e73d1544
488ea6a7078ec18948fa70281db421,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-455569,Uid:2893a33bc31a1e8eccfadfb90793698b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829103227404,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2893a33bc31a1e8eccfadfb90793698b,kubernetes.io/config.seen: 2024-08-02T18:37:08.620795297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-455569,Uid:90f9b6215e37d314780312b920c52725,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722623829099273858,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.26:8443,kubernetes.io/config.hash: 90f9b6215e37d314780312b920c52725,kubernetes.io/config.seen: 2024-08-02T18:37:08.620800390Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5ffnn,Uid:201bc75b-6530-4c5b-8fc6-ae08db2bcf12,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813973479371,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2024-08-02T18:36:13.171463568Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-455569,Uid:59d36393c6d3cc00baaad9eefe8d2b47,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813764509412,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 59d36393c6d3cc00baaad9eefe8d2b47,kubernetes.io/config.seen: 2024-08-02T18:35:59.377516736Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-455569,Uid:90f9b6215e37d314780312b920c5272
5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813761079142,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.26:8443,kubernetes.io/config.hash: 90f9b6215e37d314780312b920c52725,kubernetes.io/config.seen: 2024-08-02T18:35:59.377515473Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&PodSandboxMetadata{Name:etcd-pause-455569,Uid:dd70b4af1f21d296a10445f25a0431af,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813755400517,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.26:2379,kubernetes.io/config.hash: dd70b4af1f21d296a10445f25a0431af,kubernetes.io/config.seen: 2024-08-02T18:35:59.377512257Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&PodSandboxMetadata{Name:kube-proxy-b4mf7,Uid:22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813742658948,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-02T18:36:13.063647160Z,kubernetes.i
o/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-455569,Uid:2893a33bc31a1e8eccfadfb90793698b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722623813698828995,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2893a33bc31a1e8eccfadfb90793698b,kubernetes.io/config.seen: 2024-08-02T18:35:59.377518065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=230e611b-cdb2-4111-be05-bb2a531d8037 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.865583732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad148a10-14e6-4b71-866f-06990649f5f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.865729137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad148a10-14e6-4b71-866f-06990649f5f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.869567021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad148a10-14e6-4b71-866f-06990649f5f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.902249774Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8cbb99b-3437-43e0-b3f1-de9ff9c54a84 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.902380838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8cbb99b-3437-43e0-b3f1-de9ff9c54a84 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.903755048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3ed5256-e1ae-404d-8073-b331ea913e18 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.904793604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623852904758266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3ed5256-e1ae-404d-8073-b331ea913e18 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.905525203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2bf209c-7d69-4180-890f-5edbf833822f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.905609857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2bf209c-7d69-4180-890f-5edbf833822f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.905900843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2bf209c-7d69-4180-890f-5edbf833822f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.960495001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ce0c70f-8925-4578-a354-7a4a7b394508 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.960646881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ce0c70f-8925-4578-a354-7a4a7b394508 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.962354247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e5579f4-e76e-4398-a4c7-16a5e3a94377 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.962968561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623852962934970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e5579f4-e76e-4398-a4c7-16a5e3a94377 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.964124547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2122629-4a8c-42b5-84cd-88baac37f6c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.964261300Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2122629-4a8c-42b5-84cd-88baac37f6c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:32 pause-455569 crio[2709]: time="2024-08-02 18:37:32.964781809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2122629-4a8c-42b5-84cd-88baac37f6c2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.007781598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95126341-8af3-4851-807d-22a19338cfe0 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.007895583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95126341-8af3-4851-807d-22a19338cfe0 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.009890446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f445d380-90f6-4430-bc10-c3c3a1da3aaf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.010730108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722623853010550961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f445d380-90f6-4430-bc10-c3c3a1da3aaf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.011425028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60314526-58e3-4039-9df8-c01bd8748aa0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.011484717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60314526-58e3-4039-9df8-c01bd8748aa0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:37:33 pause-455569 crio[2709]: time="2024-08-02 18:37:33.011728602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1,PodSandboxId:ac9420bcf84288dcd3d4c1ef447dc7f7e431db9255ee0ad86f217b875ff0a68f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722623833397503665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae,PodSandboxId:2101f4df6236b862ca285935397cb60a5375e11282d493d4f4d2619f5b09f8ca,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722623833064649040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f420346,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511,PodSandboxId:dbd34b845ef400f96156cff03f38b265dbcd043d5c363b10fd2537dc4003fc38,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722623829336411642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd70b4af1f21d296a10445f25a0431af,},Annot
ations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec,PodSandboxId:b85a4b7c57f52f1f50723803ccc0dd1809b12a15e42683bf332ff8dc3e05a0dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722623829279967398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b4
7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46,PodSandboxId:ba9db4d615d05ece5860fe67f7e73d1544488ea6a7078ec18948fa70281db421,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722623829304109053,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21,PodSandboxId:c273d0224b7ab354c7d43e71a89a030161be9549f94b4b5c954caee9b65136a4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722623829256819930,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io
.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544,PodSandboxId:78fd98f44b2e6dd575f2278b3b8102789f6ecee45cd7a1003515ae27f5805bae,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722623814244089486,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4mf7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22b600e8-e5e0-4602-adf4-a37b0b8a6dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 4f4203
46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d,PodSandboxId:24c3da47f854c4890d9bd1c169cba8c210e6a51815471bd93d1e73a199b4c3ee,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722623814858930023,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5ffnn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 201bc75b-6530-4c5b-8fc6-ae08db2bcf12,},Annotations:map[string]string{io.kubernetes.container.hash: b5b4836b,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8,PodSandboxId:f81dc08a75a6fc248d3b738f13b85b86624dae4c99e87c0d5d3f3c5be502da45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722623814221669668,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pau
se-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2893a33bc31a1e8eccfadfb90793698b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7,PodSandboxId:c77d1417718b6d458802c3de472e0220327844b75642b75b1e13d04980d5c070,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722623814138527220,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-455569,io.kubern
etes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f9b6215e37d314780312b920c52725,},Annotations:map[string]string{io.kubernetes.container.hash: f5850113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe,PodSandboxId:5fa064106599a29cec2fe88172920c45b82207c52fa78625157e779ea5096173,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722623814168966816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: dd70b4af1f21d296a10445f25a0431af,},Annotations:map[string]string{io.kubernetes.container.hash: 9e853d7b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6,PodSandboxId:2aa7a486127f76e1673831093b823d9f953b1a1911eba8be573e75b091112b09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722623814106134163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-455569,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 59d36393c6d3cc00baaad9eefe8d2b47,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60314526-58e3-4039-9df8-c01bd8748aa0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc89b19747e3f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   ac9420bcf8428       coredns-7db6d8ff4d-5ffnn
	dcc8d2d8519de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   20 seconds ago      Running             kube-proxy                2                   2101f4df6236b       kube-proxy-b4mf7
	2c8e18ca5250c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago      Running             etcd                      2                   dbd34b845ef40       etcd-pause-455569
	5716d4ee88cae       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago      Running             kube-scheduler            2                   ba9db4d615d05       kube-scheduler-pause-455569
	44e5e20af30de       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago      Running             kube-controller-manager   2                   b85a4b7c57f52       kube-controller-manager-pause-455569
	c3aecacfbf4c5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago      Running             kube-apiserver            2                   c273d0224b7ab       kube-apiserver-pause-455569
	bb1163e84ba44       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago      Exited              coredns                   1                   24c3da47f854c       coredns-7db6d8ff4d-5ffnn
	e474aad35defa       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   38 seconds ago      Exited              kube-proxy                1                   78fd98f44b2e6       kube-proxy-b4mf7
	3d8d0760aafd3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   38 seconds ago      Exited              kube-scheduler            1                   f81dc08a75a6f       kube-scheduler-pause-455569
	d17d2954528e5       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   38 seconds ago      Exited              etcd                      1                   5fa064106599a       etcd-pause-455569
	c767e060079f5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   38 seconds ago      Exited              kube-apiserver            1                   c77d1417718b6       kube-apiserver-pause-455569
	64a6eabb02ce1       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   38 seconds ago      Exited              kube-controller-manager   1                   2aa7a486127f7       kube-controller-manager-pause-455569
	
	
	==> coredns [bb1163e84ba44e1a1285dd5ecb81c9b0dab83d5bf4fa9a0822433c768c1f6e9d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:34809 - 41704 "HINFO IN 110192597553160868.684303896014414589. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.009315332s
	
	
	==> coredns [cc89b19747e3f8b9e24997bb871f8597d77a5e02ad1d578a4eaacda2e00c9fb1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39731 - 28807 "HINFO IN 4922020822728551846.3276665842435115586. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010589073s
	
	
	==> describe nodes <==
	Name:               pause-455569
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-455569
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=pause-455569
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_36_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:35:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-455569
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 18:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 18:37:12 +0000   Fri, 02 Aug 2024 18:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.26
	  Hostname:    pause-455569
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac767b38fd8c4f5786712313f8649e7f
	  System UUID:                ac767b38-fd8c-4f57-8671-2313f8649e7f
	  Boot ID:                    46d84004-9124-4cb9-bd03-a90321200821
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5ffnn                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-455569                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         94s
	  kube-system                 kube-apiserver-pause-455569             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-455569    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-b4mf7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-455569             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 79s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-455569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-455569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-455569 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeReady                93s                kubelet          Node pause-455569 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node pause-455569 event: Registered Node pause-455569 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-455569 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-455569 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-455569 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-455569 event: Registered Node pause-455569 in Controller
	
	
	==> dmesg <==
	[ +10.495973] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.057306] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.047911] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.198508] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.123798] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.256233] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +4.187352] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.574879] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +0.058773] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.991092] systemd-fstab-generator[1267]: Ignoring "noauto" option for root device
	[  +0.088586] kauditd_printk_skb: 69 callbacks suppressed
	[Aug 2 18:36] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.100129] kauditd_printk_skb: 21 callbacks suppressed
	[ +41.185369] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.294729] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.189727] systemd-fstab-generator[2349]: Ignoring "noauto" option for root device
	[  +0.323284] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	[  +0.328590] systemd-fstab-generator[2575]: Ignoring "noauto" option for root device
	[  +0.599928] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[Aug 2 18:37] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +0.081669] kauditd_printk_skb: 173 callbacks suppressed
	[  +1.998248] systemd-fstab-generator[3082]: Ignoring "noauto" option for root device
	[  +4.549181] kauditd_printk_skb: 86 callbacks suppressed
	[ +11.994837] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.711306] systemd-fstab-generator[3839]: Ignoring "noauto" option for root device
	
	
	==> etcd [2c8e18ca5250cd31cab07ac5145c9d44598dabceec35bca2d3fff85a37a2c511] <==
	{"level":"info","ts":"2024-08-02T18:37:24.881004Z","caller":"traceutil/trace.go:171","msg":"trace[547944485] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:460; }","duration":"320.477733ms","start":"2024-08-02T18:37:24.560516Z","end":"2024-08-02T18:37:24.880994Z","steps":["trace[547944485] 'agreement among raft nodes before linearized reading'  (duration: 320.398424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:24.881032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.560508Z","time spent":"320.514823ms","remote":"127.0.0.1:44800","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" "}
	{"level":"warn","ts":"2024-08-02T18:37:24.881262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.585416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:24.881311Z","caller":"traceutil/trace.go:171","msg":"trace[427933947] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:460; }","duration":"305.703632ms","start":"2024-08-02T18:37:24.575596Z","end":"2024-08-02T18:37:24.8813Z","steps":["trace[427933947] 'agreement among raft nodes before linearized reading'  (duration: 305.555492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:24.88134Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.575579Z","time spent":"305.752445ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	{"level":"info","ts":"2024-08-02T18:37:25.276735Z","caller":"traceutil/trace.go:171","msg":"trace[1929792185] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"200.96358ms","start":"2024-08-02T18:37:25.075754Z","end":"2024-08-02T18:37:25.276718Z","steps":["trace[1929792185] 'read index received'  (duration: 125.192522ms)","trace[1929792185] 'applied index is now lower than readState.Index'  (duration: 75.770244ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:37:25.276994Z","caller":"traceutil/trace.go:171","msg":"trace[1624319750] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"379.380698ms","start":"2024-08-02T18:37:24.897599Z","end":"2024-08-02T18:37:25.276979Z","steps":["trace[1624319750] 'process raft request'  (duration: 303.348627ms)","trace[1624319750] 'compare'  (duration: 75.676245ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:25.278679Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:24.897588Z","time spent":"381.033404ms","remote":"127.0.0.1:45074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:399 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:25.277165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.590094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T18:37:25.27887Z","caller":"traceutil/trace.go:171","msg":"trace[1321782796] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:462; }","duration":"115.343118ms","start":"2024-08-02T18:37:25.163514Z","end":"2024-08-02T18:37:25.278857Z","steps":["trace[1321782796] 'agreement among raft nodes before linearized reading'  (duration: 113.58704ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:25.277268Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.527514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:25.279027Z","caller":"traceutil/trace.go:171","msg":"trace[912927533] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:462; }","duration":"203.309402ms","start":"2024-08-02T18:37:25.075709Z","end":"2024-08-02T18:37:25.279018Z","steps":["trace[912927533] 'agreement among raft nodes before linearized reading'  (duration: 201.521325ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:25.732107Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.085792ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156821228985899 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" value_size:6912 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T18:37:25.732606Z","caller":"traceutil/trace.go:171","msg":"trace[897316173] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"440.273588ms","start":"2024-08-02T18:37:25.292312Z","end":"2024-08-02T18:37:25.732586Z","steps":["trace[897316173] 'process raft request'  (duration: 315.603529ms)","trace[897316173] 'compare'  (duration: 123.981904ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:25.732701Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.292301Z","time spent":"440.35349ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6974,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" mod_revision:391 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" value_size:6912 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:26.223398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.860162ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12938156821228985900 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:462 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T18:37:26.223498Z","caller":"traceutil/trace.go:171","msg":"trace[174708686] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:499; }","duration":"647.779247ms","start":"2024-08-02T18:37:25.575705Z","end":"2024-08-02T18:37:26.223484Z","steps":["trace[174708686] 'read index received'  (duration: 32.219223ms)","trace[174708686] 'applied index is now lower than readState.Index'  (duration: 615.555969ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:37:26.223581Z","caller":"traceutil/trace.go:171","msg":"trace[1291130132] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"928.490646ms","start":"2024-08-02T18:37:25.29508Z","end":"2024-08-02T18:37:26.223571Z","steps":["trace[1291130132] 'process raft request'  (duration: 648.378583ms)","trace[1291130132] 'compare'  (duration: 279.675524ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:37:26.223666Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.29507Z","time spent":"928.549177ms","remote":"127.0.0.1:45074","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:462 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4069 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2024-08-02T18:37:26.223807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"648.097919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:26.223856Z","caller":"traceutil/trace.go:171","msg":"trace[1039103649] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:464; }","duration":"648.140198ms","start":"2024-08-02T18:37:25.5757Z","end":"2024-08-02T18:37:26.22384Z","steps":["trace[1039103649] 'agreement among raft nodes before linearized reading'  (duration: 648.049326ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:26.223897Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.575682Z","time spent":"648.206656ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	{"level":"warn","ts":"2024-08-02T18:37:26.224087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"481.284418ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" ","response":"range_response_count:1 size:6989"}
	{"level":"info","ts":"2024-08-02T18:37:26.224132Z","caller":"traceutil/trace.go:171","msg":"trace[916008426] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-455569; range_end:; response_count:1; response_revision:464; }","duration":"481.356026ms","start":"2024-08-02T18:37:25.742767Z","end":"2024-08-02T18:37:26.224123Z","steps":["trace[916008426] 'agreement among raft nodes before linearized reading'  (duration: 481.290039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:37:26.224155Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:37:25.742754Z","time spent":"481.395745ms","remote":"127.0.0.1:44776","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":7013,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-pause-455569\" "}
	
	
	==> etcd [d17d2954528e556a7e229c09d36091541a009339509b42632f04c55c364f5bbe] <==
	{"level":"info","ts":"2024-08-02T18:36:55.011343Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-02T18:36:55.191737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgPreVoteResp from c9867c1935b8b38d at term 2"}
	{"level":"info","ts":"2024-08-02T18:36:55.191873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d received MsgVoteResp from c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c9867c1935b8b38d became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.191914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c9867c1935b8b38d elected leader c9867c1935b8b38d at term 3"}
	{"level":"info","ts":"2024-08-02T18:36:55.209563Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c9867c1935b8b38d","local-member-attributes":"{Name:pause-455569 ClientURLs:[https://192.168.39.26:2379]}","request-path":"/0/members/c9867c1935b8b38d/attributes","cluster-id":"8cfb77a10e566a07","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:36:55.209616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:36:55.210097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:36:55.234681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.26:2379"}
	{"level":"info","ts":"2024-08-02T18:36:55.241144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:36:55.258803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:36:55.280868Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:36:55.701405Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-02T18:36:55.701505Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-455569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	{"level":"warn","ts":"2024-08-02T18:36:55.701623Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.701665Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.707221Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-02T18:36:55.707272Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.26:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-02T18:36:55.707327Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c9867c1935b8b38d","current-leader-member-id":"c9867c1935b8b38d"}
	{"level":"info","ts":"2024-08-02T18:36:55.717073Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-02T18:36:55.721388Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.26:2380"}
	{"level":"info","ts":"2024-08-02T18:36:55.721437Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-455569","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.26:2380"],"advertise-client-urls":["https://192.168.39.26:2379"]}
	
	
	==> kernel <==
	 18:37:33 up 2 min,  0 users,  load average: 0.78, 0.35, 0.13
	Linux pause-455569 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c3aecacfbf4c58f0b2be72a05b7235529f904e7a84ca65b69e993440259c6f21] <==
	I0802 18:37:12.352959       1 policy_source.go:224] refreshing policies
	I0802 18:37:12.395889       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0802 18:37:12.396436       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0802 18:37:12.396529       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0802 18:37:12.396665       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0802 18:37:12.398495       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0802 18:37:12.401277       1 shared_informer.go:320] Caches are synced for configmaps
	I0802 18:37:12.401933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0802 18:37:12.402236       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0802 18:37:12.409471       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0802 18:37:13.217153       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0802 18:37:13.905410       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0802 18:37:13.919944       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0802 18:37:13.976350       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0802 18:37:14.016250       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0802 18:37:14.024718       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0802 18:37:24.514398       1 controller.go:615] quota admission added evaluator for: endpoints
	I0802 18:37:24.896023       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0802 18:37:26.224909       1 trace.go:236] Trace[1766215622]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7ace95dc-854f-4c60-9265-92e6ba476608,client:192.168.39.26,api-group:apps,api-version:v1,name:coredns,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:deployments,scope:resource,url:/apis/apps/v1/namespaces/kube-system/deployments/coredns/status,user-agent:kube-controller-manager/v1.30.3 (linux/amd64) kubernetes/6fc0a69/system:serviceaccount:kube-system:deployment-controller,verb:PUT (02-Aug-2024 18:37:25.292) (total time: 932ms):
	Trace[1766215622]: ["GuaranteedUpdate etcd3" audit-id:7ace95dc-854f-4c60-9265-92e6ba476608,key:/deployments/kube-system/coredns,type:*apps.Deployment,resource:deployments.apps 932ms (18:37:25.292)
	Trace[1766215622]:  ---"Txn call completed" 929ms (18:37:26.224)]
	Trace[1766215622]: [932.142619ms] [932.142619ms] END
	I0802 18:37:26.227236       1 trace.go:236] Trace[437600409]: "Get" accept:application/json, */*,audit-id:98f7a1c4-ea54-433c-adb7-9efb2f322d8f,client:192.168.39.1,api-group:,api-version:v1,name:kube-apiserver-pause-455569,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-455569,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (02-Aug-2024 18:37:25.575) (total time: 652ms):
	Trace[437600409]: ---"About to write a response" 650ms (18:37:26.225)
	Trace[437600409]: [652.043182ms] [652.043182ms] END
	
	
	==> kube-apiserver [c767e060079f51a0fe6776f8b9e6d8ae3202e10f615bbef76184e23e859312c7] <==
	I0802 18:36:54.974491       1 options.go:221] external host was not specified, using 192.168.39.26
	I0802 18:36:54.975607       1 server.go:148] Version: v1.30.3
	I0802 18:36:54.975642       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [44e5e20af30de66cbfed75a95fa669c0aaa0641deecd2064c8da6edb7f0663ec] <==
	I0802 18:37:24.586715       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0802 18:37:24.587913       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0802 18:37:24.591274       1 shared_informer.go:320] Caches are synced for TTL
	I0802 18:37:24.595636       1 shared_informer.go:320] Caches are synced for PV protection
	I0802 18:37:24.596863       1 shared_informer.go:320] Caches are synced for node
	I0802 18:37:24.596947       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0802 18:37:24.596978       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0802 18:37:24.596983       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0802 18:37:24.596989       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0802 18:37:24.608648       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0802 18:37:24.615018       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0802 18:37:24.669088       1 shared_informer.go:320] Caches are synced for stateful set
	I0802 18:37:24.673387       1 shared_informer.go:320] Caches are synced for PVC protection
	I0802 18:37:24.698877       1 shared_informer.go:320] Caches are synced for persistent volume
	I0802 18:37:24.701378       1 shared_informer.go:320] Caches are synced for attach detach
	I0802 18:37:24.704035       1 shared_informer.go:320] Caches are synced for ephemeral
	I0802 18:37:24.719804       1 shared_informer.go:320] Caches are synced for expand
	I0802 18:37:24.736166       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 18:37:24.777259       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0802 18:37:24.781403       1 shared_informer.go:320] Caches are synced for cronjob
	I0802 18:37:24.786158       1 shared_informer.go:320] Caches are synced for resource quota
	I0802 18:37:24.805411       1 shared_informer.go:320] Caches are synced for job
	I0802 18:37:25.212530       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:37:25.264284       1 shared_informer.go:320] Caches are synced for garbage collector
	I0802 18:37:25.264441       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [64a6eabb02ce1c612e86787dcbd9e84f94640775afdb49d7ca722eb2eedaaec6] <==
	
	
	==> kube-proxy [dcc8d2d8519de6da3549ce7a72a948dc9c197ac7db99b9ac0f4c79ca198c10ae] <==
	I0802 18:37:13.219744       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:37:13.235745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.26"]
	I0802 18:37:13.309493       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:37:13.309554       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:37:13.309574       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:37:13.315295       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:37:13.315475       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:37:13.315500       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:37:13.316906       1 config.go:192] "Starting service config controller"
	I0802 18:37:13.316945       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:37:13.316970       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:37:13.316974       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:37:13.317548       1 config.go:319] "Starting node config controller"
	I0802 18:37:13.317572       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:37:13.418038       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:37:13.418248       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:37:13.418272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [e474aad35defa565f6937df2d5be4e806cc8ab2fce6eaf81546991d325417544] <==
	
	
	==> kube-scheduler [3d8d0760aafd3c5d9f61980df97167b4eac1c59ac058e1feab4e4844c1f53db8] <==
	
	
	==> kube-scheduler [5716d4ee88cae914140a385f450eb5202f76dc4d1de2c930c6d5ef68c5e3ea46] <==
	I0802 18:37:10.184874       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:37:12.234265       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:37:12.234437       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:37:12.234467       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:37:12.234532       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:37:12.296093       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:37:12.297306       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:37:12.303878       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:37:12.303960       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:37:12.304798       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:37:12.309167       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:37:12.405099       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848517    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90f9b6215e37d314780312b920c52725-usr-share-ca-certificates\") pod \"kube-apiserver-pause-455569\" (UID: \"90f9b6215e37d314780312b920c52725\") " pod="kube-system/kube-apiserver-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848534    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-flexvolume-dir\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848547    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-k8s-certs\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848564    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-kubeconfig\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848580    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59d36393c6d3cc00baaad9eefe8d2b47-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-455569\" (UID: \"59d36393c6d3cc00baaad9eefe8d2b47\") " pod="kube-system/kube-controller-manager-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.848593    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90f9b6215e37d314780312b920c52725-k8s-certs\") pod \"kube-apiserver-pause-455569\" (UID: \"90f9b6215e37d314780312b920c52725\") " pod="kube-system/kube-apiserver-pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: I0802 18:37:08.945166    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:08 pause-455569 kubelet[3089]: E0802 18:37:08.946166    3089 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.26:8443: connect: connection refused" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.247935    3089 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-455569?timeout=10s\": dial tcp 192.168.39.26:8443: connect: connection refused" interval="800ms"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: I0802 18:37:09.350348    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.351793    3089 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.26:8443: connect: connection refused" node="pause-455569"
	Aug 02 18:37:09 pause-455569 kubelet[3089]: W0802 18:37:09.511034    3089 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-455569&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Aug 02 18:37:09 pause-455569 kubelet[3089]: E0802 18:37:09.511099    3089 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-455569&limit=500&resourceVersion=0": dial tcp 192.168.39.26:8443: connect: connection refused
	Aug 02 18:37:10 pause-455569 kubelet[3089]: I0802 18:37:10.153959    3089 kubelet_node_status.go:73] "Attempting to register node" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.459855    3089 kubelet_node_status.go:112] "Node was previously registered" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.460611    3089 kubelet_node_status.go:76] "Successfully registered node" node="pause-455569"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.462341    3089 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.463814    3089 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.616407    3089 apiserver.go:52] "Watching apiserver"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.620230    3089 topology_manager.go:215] "Topology Admit Handler" podUID="201bc75b-6530-4c5b-8fc6-ae08db2bcf12" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5ffnn"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.620374    3089 topology_manager.go:215] "Topology Admit Handler" podUID="22b600e8-e5e0-4602-adf4-a37b0b8a6dbb" podNamespace="kube-system" podName="kube-proxy-b4mf7"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.645856    3089 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.688450    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22b600e8-e5e0-4602-adf4-a37b0b8a6dbb-lib-modules\") pod \"kube-proxy-b4mf7\" (UID: \"22b600e8-e5e0-4602-adf4-a37b0b8a6dbb\") " pod="kube-system/kube-proxy-b4mf7"
	Aug 02 18:37:12 pause-455569 kubelet[3089]: I0802 18:37:12.688497    3089 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22b600e8-e5e0-4602-adf4-a37b0b8a6dbb-xtables-lock\") pod \"kube-proxy-b4mf7\" (UID: \"22b600e8-e5e0-4602-adf4-a37b0b8a6dbb\") " pod="kube-system/kube-proxy-b4mf7"
	Aug 02 18:37:22 pause-455569 kubelet[3089]: I0802 18:37:22.005661    3089 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
** stderr ** 
	E0802 18:37:32.502059   52338 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19355-5397/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-455569 -n pause-455569
helpers_test.go:261: (dbg) Run:  kubectl --context pause-455569 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (77.78s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (283.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m42.784926199s)

-- stdout --
	* [old-k8s-version-490984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-490984" primary control-plane node in "old-k8s-version-490984" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0802 18:37:50.039183   54981 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:37:50.039325   54981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:37:50.039337   54981 out.go:304] Setting ErrFile to fd 2...
	I0802 18:37:50.039345   54981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:37:50.039555   54981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:37:50.040108   54981 out.go:298] Setting JSON to false
	I0802 18:37:50.040989   54981 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4814,"bootTime":1722619056,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:37:50.041057   54981 start.go:139] virtualization: kvm guest
	I0802 18:37:50.043339   54981 out.go:177] * [old-k8s-version-490984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:37:50.044618   54981 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:37:50.044670   54981 notify.go:220] Checking for updates...
	I0802 18:37:50.046948   54981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:37:50.048196   54981 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:37:50.049633   54981 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:37:50.050926   54981 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:37:50.052152   54981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:37:50.053759   54981 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:50.053908   54981 config.go:182] Loaded profile config "cert-options-643429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:50.054020   54981 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:37:50.054122   54981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:37:50.087307   54981 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:37:50.088522   54981 start.go:297] selected driver: kvm2
	I0802 18:37:50.088537   54981 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:37:50.088551   54981 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:37:50.089577   54981 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:37:50.089681   54981 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:37:50.104771   54981 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:37:50.104826   54981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 18:37:50.105059   54981 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:37:50.105117   54981 cni.go:84] Creating CNI manager for ""
	I0802 18:37:50.105129   54981 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:37:50.105135   54981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 18:37:50.105193   54981 start.go:340] cluster config:
	{Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:37:50.105288   54981 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:37:50.107064   54981 out.go:177] * Starting "old-k8s-version-490984" primary control-plane node in "old-k8s-version-490984" cluster
	I0802 18:37:50.108478   54981 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:37:50.108518   54981 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0802 18:37:50.108527   54981 cache.go:56] Caching tarball of preloaded images
	I0802 18:37:50.108618   54981 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:37:50.108630   54981 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0802 18:37:50.108717   54981 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json ...
	I0802 18:37:50.108734   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json: {Name:mk60b2005907a376ae67f054ea6420729179d52d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:37:50.108853   54981 start.go:360] acquireMachinesLock for old-k8s-version-490984: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:38:03.911711   54981 start.go:364] duration metric: took 13.802813948s to acquireMachinesLock for "old-k8s-version-490984"
	I0802 18:38:03.911791   54981 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:38:03.911937   54981 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 18:38:03.913911   54981 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 18:38:03.914105   54981 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:38:03.914158   54981 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:38:03.930482   54981 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0802 18:38:03.930888   54981 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:38:03.931457   54981 main.go:141] libmachine: Using API Version  1
	I0802 18:38:03.931480   54981 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:38:03.931847   54981 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:38:03.932041   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:38:03.932180   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:03.932339   54981 start.go:159] libmachine.API.Create for "old-k8s-version-490984" (driver="kvm2")
	I0802 18:38:03.932374   54981 client.go:168] LocalClient.Create starting
	I0802 18:38:03.932411   54981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 18:38:03.932462   54981 main.go:141] libmachine: Decoding PEM data...
	I0802 18:38:03.932493   54981 main.go:141] libmachine: Parsing certificate...
	I0802 18:38:03.932559   54981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 18:38:03.932583   54981 main.go:141] libmachine: Decoding PEM data...
	I0802 18:38:03.932602   54981 main.go:141] libmachine: Parsing certificate...
	I0802 18:38:03.932627   54981 main.go:141] libmachine: Running pre-create checks...
	I0802 18:38:03.932646   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .PreCreateCheck
	I0802 18:38:03.933000   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetConfigRaw
	I0802 18:38:03.933430   54981 main.go:141] libmachine: Creating machine...
	I0802 18:38:03.933447   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .Create
	I0802 18:38:03.933574   54981 main.go:141] libmachine: (old-k8s-version-490984) Creating KVM machine...
	I0802 18:38:03.934980   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found existing default KVM network
	I0802 18:38:03.936638   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:03.936473   55091 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:10:85:f6} reservation:<nil>}
	I0802 18:38:03.938085   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:03.937980   55091 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2970}
	I0802 18:38:03.938119   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | created network xml: 
	I0802 18:38:03.938141   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | <network>
	I0802 18:38:03.938150   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   <name>mk-old-k8s-version-490984</name>
	I0802 18:38:03.938163   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   <dns enable='no'/>
	I0802 18:38:03.938173   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   
	I0802 18:38:03.938201   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0802 18:38:03.938221   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |     <dhcp>
	I0802 18:38:03.938251   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0802 18:38:03.938262   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |     </dhcp>
	I0802 18:38:03.938274   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   </ip>
	I0802 18:38:03.938311   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG |   
	I0802 18:38:03.938327   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | </network>
	I0802 18:38:03.938340   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | 
	I0802 18:38:03.944042   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | trying to create private KVM network mk-old-k8s-version-490984 192.168.50.0/24...
	I0802 18:38:04.018978   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | private KVM network mk-old-k8s-version-490984 192.168.50.0/24 created
	I0802 18:38:04.019014   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:04.018957   55091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:38:04.019061   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984 ...
	I0802 18:38:04.019091   54981 main.go:141] libmachine: (old-k8s-version-490984) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 18:38:04.019141   54981 main.go:141] libmachine: (old-k8s-version-490984) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 18:38:04.252738   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:04.252648   55091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa...
	I0802 18:38:04.667478   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:04.667352   55091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/old-k8s-version-490984.rawdisk...
	I0802 18:38:04.667506   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Writing magic tar header
	I0802 18:38:04.667518   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Writing SSH key tar header
	I0802 18:38:04.667569   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:04.667510   55091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984 ...
	I0802 18:38:04.667637   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984
	I0802 18:38:04.667689   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984 (perms=drwx------)
	I0802 18:38:04.667720   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 18:38:04.667734   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 18:38:04.667745   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:38:04.667761   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 18:38:04.667774   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 18:38:04.667789   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home/jenkins
	I0802 18:38:04.667801   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Checking permissions on dir: /home
	I0802 18:38:04.667812   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 18:38:04.667821   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Skipping /home - not owner
	I0802 18:38:04.667836   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 18:38:04.667850   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 18:38:04.667867   54981 main.go:141] libmachine: (old-k8s-version-490984) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 18:38:04.667879   54981 main.go:141] libmachine: (old-k8s-version-490984) Creating domain...
	I0802 18:38:04.669023   54981 main.go:141] libmachine: (old-k8s-version-490984) define libvirt domain using xml: 
	I0802 18:38:04.669047   54981 main.go:141] libmachine: (old-k8s-version-490984) <domain type='kvm'>
	I0802 18:38:04.669054   54981 main.go:141] libmachine: (old-k8s-version-490984)   <name>old-k8s-version-490984</name>
	I0802 18:38:04.669060   54981 main.go:141] libmachine: (old-k8s-version-490984)   <memory unit='MiB'>2200</memory>
	I0802 18:38:04.669069   54981 main.go:141] libmachine: (old-k8s-version-490984)   <vcpu>2</vcpu>
	I0802 18:38:04.669076   54981 main.go:141] libmachine: (old-k8s-version-490984)   <features>
	I0802 18:38:04.669090   54981 main.go:141] libmachine: (old-k8s-version-490984)     <acpi/>
	I0802 18:38:04.669097   54981 main.go:141] libmachine: (old-k8s-version-490984)     <apic/>
	I0802 18:38:04.669117   54981 main.go:141] libmachine: (old-k8s-version-490984)     <pae/>
	I0802 18:38:04.669129   54981 main.go:141] libmachine: (old-k8s-version-490984)     
	I0802 18:38:04.669198   54981 main.go:141] libmachine: (old-k8s-version-490984)   </features>
	I0802 18:38:04.669228   54981 main.go:141] libmachine: (old-k8s-version-490984)   <cpu mode='host-passthrough'>
	I0802 18:38:04.669239   54981 main.go:141] libmachine: (old-k8s-version-490984)   
	I0802 18:38:04.669248   54981 main.go:141] libmachine: (old-k8s-version-490984)   </cpu>
	I0802 18:38:04.669260   54981 main.go:141] libmachine: (old-k8s-version-490984)   <os>
	I0802 18:38:04.669271   54981 main.go:141] libmachine: (old-k8s-version-490984)     <type>hvm</type>
	I0802 18:38:04.669283   54981 main.go:141] libmachine: (old-k8s-version-490984)     <boot dev='cdrom'/>
	I0802 18:38:04.669293   54981 main.go:141] libmachine: (old-k8s-version-490984)     <boot dev='hd'/>
	I0802 18:38:04.669313   54981 main.go:141] libmachine: (old-k8s-version-490984)     <bootmenu enable='no'/>
	I0802 18:38:04.669328   54981 main.go:141] libmachine: (old-k8s-version-490984)   </os>
	I0802 18:38:04.669345   54981 main.go:141] libmachine: (old-k8s-version-490984)   <devices>
	I0802 18:38:04.669362   54981 main.go:141] libmachine: (old-k8s-version-490984)     <disk type='file' device='cdrom'>
	I0802 18:38:04.669379   54981 main.go:141] libmachine: (old-k8s-version-490984)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/boot2docker.iso'/>
	I0802 18:38:04.669390   54981 main.go:141] libmachine: (old-k8s-version-490984)       <target dev='hdc' bus='scsi'/>
	I0802 18:38:04.669402   54981 main.go:141] libmachine: (old-k8s-version-490984)       <readonly/>
	I0802 18:38:04.669410   54981 main.go:141] libmachine: (old-k8s-version-490984)     </disk>
	I0802 18:38:04.669443   54981 main.go:141] libmachine: (old-k8s-version-490984)     <disk type='file' device='disk'>
	I0802 18:38:04.669462   54981 main.go:141] libmachine: (old-k8s-version-490984)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 18:38:04.669479   54981 main.go:141] libmachine: (old-k8s-version-490984)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/old-k8s-version-490984.rawdisk'/>
	I0802 18:38:04.669496   54981 main.go:141] libmachine: (old-k8s-version-490984)       <target dev='hda' bus='virtio'/>
	I0802 18:38:04.669504   54981 main.go:141] libmachine: (old-k8s-version-490984)     </disk>
	I0802 18:38:04.669509   54981 main.go:141] libmachine: (old-k8s-version-490984)     <interface type='network'>
	I0802 18:38:04.669517   54981 main.go:141] libmachine: (old-k8s-version-490984)       <source network='mk-old-k8s-version-490984'/>
	I0802 18:38:04.669522   54981 main.go:141] libmachine: (old-k8s-version-490984)       <model type='virtio'/>
	I0802 18:38:04.669528   54981 main.go:141] libmachine: (old-k8s-version-490984)     </interface>
	I0802 18:38:04.669539   54981 main.go:141] libmachine: (old-k8s-version-490984)     <interface type='network'>
	I0802 18:38:04.669551   54981 main.go:141] libmachine: (old-k8s-version-490984)       <source network='default'/>
	I0802 18:38:04.669562   54981 main.go:141] libmachine: (old-k8s-version-490984)       <model type='virtio'/>
	I0802 18:38:04.669571   54981 main.go:141] libmachine: (old-k8s-version-490984)     </interface>
	I0802 18:38:04.669581   54981 main.go:141] libmachine: (old-k8s-version-490984)     <serial type='pty'>
	I0802 18:38:04.669590   54981 main.go:141] libmachine: (old-k8s-version-490984)       <target port='0'/>
	I0802 18:38:04.669599   54981 main.go:141] libmachine: (old-k8s-version-490984)     </serial>
	I0802 18:38:04.669637   54981 main.go:141] libmachine: (old-k8s-version-490984)     <console type='pty'>
	I0802 18:38:04.669657   54981 main.go:141] libmachine: (old-k8s-version-490984)       <target type='serial' port='0'/>
	I0802 18:38:04.669682   54981 main.go:141] libmachine: (old-k8s-version-490984)     </console>
	I0802 18:38:04.669693   54981 main.go:141] libmachine: (old-k8s-version-490984)     <rng model='virtio'>
	I0802 18:38:04.669704   54981 main.go:141] libmachine: (old-k8s-version-490984)       <backend model='random'>/dev/random</backend>
	I0802 18:38:04.669715   54981 main.go:141] libmachine: (old-k8s-version-490984)     </rng>
	I0802 18:38:04.669724   54981 main.go:141] libmachine: (old-k8s-version-490984)     
	I0802 18:38:04.669738   54981 main.go:141] libmachine: (old-k8s-version-490984)     
	I0802 18:38:04.669749   54981 main.go:141] libmachine: (old-k8s-version-490984)   </devices>
	I0802 18:38:04.669758   54981 main.go:141] libmachine: (old-k8s-version-490984) </domain>
	I0802 18:38:04.669768   54981 main.go:141] libmachine: (old-k8s-version-490984) 
	I0802 18:38:04.673662   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:82:58:38 in network default
	I0802 18:38:04.674202   54981 main.go:141] libmachine: (old-k8s-version-490984) Ensuring networks are active...
	I0802 18:38:04.674220   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:04.674978   54981 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network default is active
	I0802 18:38:04.675318   54981 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network mk-old-k8s-version-490984 is active
	I0802 18:38:04.675882   54981 main.go:141] libmachine: (old-k8s-version-490984) Getting domain xml...
	I0802 18:38:04.676619   54981 main.go:141] libmachine: (old-k8s-version-490984) Creating domain...
	I0802 18:38:06.102686   54981 main.go:141] libmachine: (old-k8s-version-490984) Waiting to get IP...
	I0802 18:38:06.103944   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:06.104529   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:06.104557   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:06.104494   55091 retry.go:31] will retry after 200.743574ms: waiting for machine to come up
	I0802 18:38:06.307247   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:06.307789   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:06.307819   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:06.307738   55091 retry.go:31] will retry after 286.61482ms: waiting for machine to come up
	I0802 18:38:06.596392   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:06.596999   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:06.597031   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:06.596945   55091 retry.go:31] will retry after 481.775764ms: waiting for machine to come up
	I0802 18:38:07.080880   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:07.081432   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:07.081459   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:07.081382   55091 retry.go:31] will retry after 412.019681ms: waiting for machine to come up
	I0802 18:38:07.494918   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:07.495616   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:07.495650   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:07.495540   55091 retry.go:31] will retry after 489.690456ms: waiting for machine to come up
	I0802 18:38:07.987540   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:07.988129   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:07.988158   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:07.988074   55091 retry.go:31] will retry after 942.202461ms: waiting for machine to come up
	I0802 18:38:08.931779   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:08.932302   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:08.932355   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:08.932252   55091 retry.go:31] will retry after 753.22966ms: waiting for machine to come up
	I0802 18:38:09.686608   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:09.687131   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:09.687162   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:09.687054   55091 retry.go:31] will retry after 1.05305244s: waiting for machine to come up
	I0802 18:38:10.742397   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:10.742910   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:10.742938   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:10.742856   55091 retry.go:31] will retry after 1.721802312s: waiting for machine to come up
	I0802 18:38:12.466890   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:12.467411   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:12.467433   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:12.467345   55091 retry.go:31] will retry after 1.772623648s: waiting for machine to come up
	I0802 18:38:14.241209   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:14.241794   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:14.241815   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:14.241729   55091 retry.go:31] will retry after 2.14047955s: waiting for machine to come up
	I0802 18:38:16.384395   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:16.384982   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:16.385012   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:16.384919   55091 retry.go:31] will retry after 2.73530416s: waiting for machine to come up
	I0802 18:38:19.121850   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:19.122353   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:19.122378   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:19.122301   55091 retry.go:31] will retry after 4.005567122s: waiting for machine to come up
	I0802 18:38:23.310821   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:23.311199   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:38:23.311228   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:38:23.311161   55091 retry.go:31] will retry after 5.111276832s: waiting for machine to come up
	I0802 18:38:28.427548   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.428243   54981 main.go:141] libmachine: (old-k8s-version-490984) Found IP for machine: 192.168.50.104
	I0802 18:38:28.428270   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has current primary IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.428278   54981 main.go:141] libmachine: (old-k8s-version-490984) Reserving static IP address...
	I0802 18:38:28.428722   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-490984", mac: "52:54:00:e1:cb:7a", ip: "192.168.50.104"} in network mk-old-k8s-version-490984
	I0802 18:38:28.504964   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Getting to WaitForSSH function...
	I0802 18:38:28.504993   54981 main.go:141] libmachine: (old-k8s-version-490984) Reserved static IP address: 192.168.50.104
	I0802 18:38:28.505005   54981 main.go:141] libmachine: (old-k8s-version-490984) Waiting for SSH to be available...
	I0802 18:38:28.507648   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.508032   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:28.508065   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.508258   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH client type: external
	I0802 18:38:28.508286   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa (-rw-------)
	I0802 18:38:28.508335   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:38:28.508354   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | About to run SSH command:
	I0802 18:38:28.508367   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | exit 0
	I0802 18:38:28.626970   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | SSH cmd err, output: <nil>: 
	I0802 18:38:28.627278   54981 main.go:141] libmachine: (old-k8s-version-490984) KVM machine creation complete!
	I0802 18:38:28.627519   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetConfigRaw
	I0802 18:38:28.628065   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:28.628250   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:28.628441   54981 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 18:38:28.628460   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetState
	I0802 18:38:28.629583   54981 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 18:38:28.629599   54981 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 18:38:28.629607   54981 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 18:38:28.629614   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:28.632367   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.632689   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:28.632709   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.632850   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:28.633045   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.633196   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.633341   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:28.633518   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:28.633726   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:28.633739   54981 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 18:38:28.730337   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:38:28.730362   54981 main.go:141] libmachine: Detecting the provisioner...
	I0802 18:38:28.730370   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:28.733107   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.733447   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:28.733489   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.733612   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:28.733847   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.734010   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.734170   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:28.734325   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:28.734507   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:28.734521   54981 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 18:38:28.835553   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 18:38:28.835660   54981 main.go:141] libmachine: found compatible host: buildroot
	I0802 18:38:28.835670   54981 main.go:141] libmachine: Provisioning with buildroot...
	I0802 18:38:28.835681   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:38:28.835948   54981 buildroot.go:166] provisioning hostname "old-k8s-version-490984"
	I0802 18:38:28.835979   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:38:28.836151   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:28.838738   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.839076   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:28.839094   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.839269   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:28.839446   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.839679   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.839850   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:28.840039   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:28.840237   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:28.840257   54981 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490984 && echo "old-k8s-version-490984" | sudo tee /etc/hostname
	I0802 18:38:28.948970   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490984
	
	I0802 18:38:28.949004   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:28.951665   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.952057   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:28.952092   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:28.952154   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:28.952414   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.952596   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:28.952715   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:28.952871   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:28.953039   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:28.953057   54981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490984/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:38:29.059542   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:38:29.059602   54981 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:38:29.059625   54981 buildroot.go:174] setting up certificates
	I0802 18:38:29.059636   54981 provision.go:84] configureAuth start
	I0802 18:38:29.059650   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:38:29.059987   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:38:29.062548   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.062818   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.062863   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.062935   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.065241   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.065535   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.065560   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.065699   54981 provision.go:143] copyHostCerts
	I0802 18:38:29.065756   54981 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:38:29.065768   54981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:38:29.065836   54981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:38:29.065953   54981 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:38:29.065964   54981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:38:29.065999   54981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:38:29.066086   54981 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:38:29.066096   54981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:38:29.066126   54981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:38:29.066199   54981 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490984 san=[127.0.0.1 192.168.50.104 localhost minikube old-k8s-version-490984]
	I0802 18:38:29.108630   54981 provision.go:177] copyRemoteCerts
	I0802 18:38:29.108702   54981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:38:29.108735   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.111543   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.111882   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.111918   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.112046   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.112232   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.112371   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.112507   54981 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:38:29.192695   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:38:29.218435   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 18:38:29.242236   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 18:38:29.264787   54981 provision.go:87] duration metric: took 205.138681ms to configureAuth
	I0802 18:38:29.264816   54981 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:38:29.264976   54981 config.go:182] Loaded profile config "old-k8s-version-490984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:38:29.265058   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.268190   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.268611   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.268634   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.268848   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.269055   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.269217   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.269384   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.269542   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:29.269691   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:29.269707   54981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:38:29.549718   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:38:29.549748   54981 main.go:141] libmachine: Checking connection to Docker...
	I0802 18:38:29.549760   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetURL
	I0802 18:38:29.551060   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using libvirt version 6000000
	I0802 18:38:29.553433   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.553787   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.553814   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.553988   54981 main.go:141] libmachine: Docker is up and running!
	I0802 18:38:29.554005   54981 main.go:141] libmachine: Reticulating splines...
	I0802 18:38:29.554013   54981 client.go:171] duration metric: took 25.621627977s to LocalClient.Create
	I0802 18:38:29.554037   54981 start.go:167] duration metric: took 25.621701227s to libmachine.API.Create "old-k8s-version-490984"
	I0802 18:38:29.554049   54981 start.go:293] postStartSetup for "old-k8s-version-490984" (driver="kvm2")
	I0802 18:38:29.554061   54981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:38:29.554096   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:29.554355   54981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:38:29.554387   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.556482   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.556793   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.556821   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.557007   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.557197   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.557357   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.557513   54981 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:38:29.636984   54981 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:38:29.641086   54981 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:38:29.641107   54981 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:38:29.641159   54981 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:38:29.641231   54981 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:38:29.641318   54981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:38:29.649629   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:38:29.672098   54981 start.go:296] duration metric: took 118.034286ms for postStartSetup
	I0802 18:38:29.672154   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetConfigRaw
	I0802 18:38:29.672722   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:38:29.675304   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.675692   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.675723   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.675948   54981 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json ...
	I0802 18:38:29.676181   54981 start.go:128] duration metric: took 25.764227127s to createHost
	I0802 18:38:29.676210   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.678090   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.678435   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.678457   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.678619   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.678810   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.679028   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.679198   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.679365   54981 main.go:141] libmachine: Using SSH client type: native
	I0802 18:38:29.679574   54981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:38:29.679587   54981 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:38:29.779792   54981 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722623909.747301795
	
	I0802 18:38:29.779832   54981 fix.go:216] guest clock: 1722623909.747301795
	I0802 18:38:29.779844   54981 fix.go:229] Guest: 2024-08-02 18:38:29.747301795 +0000 UTC Remote: 2024-08-02 18:38:29.67619577 +0000 UTC m=+39.671259676 (delta=71.106025ms)
	I0802 18:38:29.779871   54981 fix.go:200] guest clock delta is within tolerance: 71.106025ms
	I0802 18:38:29.779879   54981 start.go:83] releasing machines lock for "old-k8s-version-490984", held for 25.868128343s
	I0802 18:38:29.779913   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:29.780177   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:38:29.783065   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.783494   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.783526   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.783657   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:29.784296   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:29.784489   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:38:29.784589   54981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:38:29.784635   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.784722   54981 ssh_runner.go:195] Run: cat /version.json
	I0802 18:38:29.784748   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:38:29.787143   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.787571   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.787606   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.787819   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.788004   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.788004   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.788173   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.788330   54981 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:38:29.788347   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:29.788372   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:29.788522   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:38:29.788659   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:38:29.788834   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:38:29.788990   54981 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:38:29.900731   54981 ssh_runner.go:195] Run: systemctl --version
	I0802 18:38:29.906486   54981 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:38:30.057519   54981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:38:30.063539   54981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:38:30.063622   54981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:38:30.078575   54981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:38:30.078609   54981 start.go:495] detecting cgroup driver to use...
	I0802 18:38:30.078674   54981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:38:30.095764   54981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:38:30.111531   54981 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:38:30.111575   54981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:38:30.126798   54981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:38:30.141307   54981 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:38:30.267611   54981 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:38:30.418494   54981 docker.go:233] disabling docker service ...
	I0802 18:38:30.418570   54981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:38:30.433962   54981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:38:30.452382   54981 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:38:30.577261   54981 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:38:30.708783   54981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:38:30.723206   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:38:30.740963   54981 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0802 18:38:30.741034   54981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:38:30.751036   54981 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:38:30.751141   54981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:38:30.761038   54981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:38:30.771159   54981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:38:30.781727   54981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:38:30.791998   54981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:38:30.801648   54981 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:38:30.801707   54981 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:38:30.817710   54981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:38:30.828644   54981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:38:30.957522   54981 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:38:31.099407   54981 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:38:31.099489   54981 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:38:31.104718   54981 start.go:563] Will wait 60s for crictl version
	I0802 18:38:31.104792   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:31.108487   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:38:31.151632   54981 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:38:31.151735   54981 ssh_runner.go:195] Run: crio --version
	I0802 18:38:31.180325   54981 ssh_runner.go:195] Run: crio --version
	I0802 18:38:31.209395   54981 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0802 18:38:31.210819   54981 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:38:31.213991   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:31.214400   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:38:18 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:38:31.214431   54981 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:38:31.214687   54981 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0802 18:38:31.218678   54981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:38:31.230821   54981 kubeadm.go:883] updating cluster {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:38:31.230947   54981 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:38:31.230996   54981 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:38:31.267435   54981 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:38:31.267517   54981 ssh_runner.go:195] Run: which lz4
	I0802 18:38:31.272331   54981 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:38:31.277317   54981 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:38:31.277354   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0802 18:38:32.761021   54981 crio.go:462] duration metric: took 1.488737357s to copy over tarball
	I0802 18:38:32.761088   54981 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:38:35.280502   54981 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.519386928s)
	I0802 18:38:35.280537   54981 crio.go:469] duration metric: took 2.519488566s to extract the tarball
	I0802 18:38:35.280547   54981 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:38:35.322670   54981 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:38:35.367761   54981 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:38:35.367790   54981 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 18:38:35.367866   54981 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:38:35.367889   54981 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:38:35.367941   54981 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0802 18:38:35.367950   54981 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:38:35.367991   54981 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0802 18:38:35.367891   54981 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:38:35.367869   54981 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:38:35.368102   54981 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:38:35.369825   54981 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:38:35.370030   54981 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:38:35.370128   54981 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:38:35.370328   54981 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:38:35.370461   54981 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:38:35.370681   54981 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0802 18:38:35.370806   54981 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0802 18:38:35.370967   54981 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:38:35.605208   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0802 18:38:35.632341   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:38:35.637233   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:38:35.642401   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:38:35.647786   54981 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0802 18:38:35.647831   54981 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0802 18:38:35.647874   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.649829   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0802 18:38:35.662003   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:38:35.691063   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0802 18:38:35.742909   54981 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0802 18:38:35.742961   54981 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:38:35.742963   54981 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0802 18:38:35.742994   54981 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:38:35.743018   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.743176   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.756101   54981 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0802 18:38:35.756152   54981 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:38:35.756157   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0802 18:38:35.756192   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.801794   54981 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0802 18:38:35.801815   54981 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0802 18:38:35.801840   54981 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:38:35.801849   54981 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:38:35.801889   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.801893   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.801934   54981 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0802 18:38:35.801962   54981 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0802 18:38:35.802000   54981 ssh_runner.go:195] Run: which crictl
	I0802 18:38:35.802005   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:38:35.802000   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:38:35.830911   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:38:35.830956   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0802 18:38:35.830959   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:38:35.831023   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0802 18:38:35.926490   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0802 18:38:35.926576   54981 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0802 18:38:35.926593   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0802 18:38:35.926639   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0802 18:38:35.944087   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0802 18:38:35.944134   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0802 18:38:35.963198   54981 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0802 18:38:36.253937   54981 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:38:36.396468   54981 cache_images.go:92] duration metric: took 1.028656858s to LoadCachedImages
	W0802 18:38:36.396576   54981 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0802 18:38:36.396600   54981 kubeadm.go:934] updating node { 192.168.50.104 8443 v1.20.0 crio true true} ...
	I0802 18:38:36.396708   54981 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490984 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:38:36.396790   54981 ssh_runner.go:195] Run: crio config
	I0802 18:38:36.446634   54981 cni.go:84] Creating CNI manager for ""
	I0802 18:38:36.446659   54981 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:38:36.446671   54981 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:38:36.446694   54981 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490984 NodeName:old-k8s-version-490984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0802 18:38:36.446867   54981 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490984"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:38:36.446939   54981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0802 18:38:36.457063   54981 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:38:36.457151   54981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:38:36.466741   54981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0802 18:38:36.483482   54981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:38:36.504586   54981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0802 18:38:36.520921   54981 ssh_runner.go:195] Run: grep 192.168.50.104	control-plane.minikube.internal$ /etc/hosts
	I0802 18:38:36.524505   54981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:38:36.535830   54981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:38:36.657010   54981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:38:36.675436   54981 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984 for IP: 192.168.50.104
	I0802 18:38:36.675465   54981 certs.go:194] generating shared ca certs ...
	I0802 18:38:36.675484   54981 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:36.675641   54981 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:38:36.675690   54981 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:38:36.675704   54981 certs.go:256] generating profile certs ...
	I0802 18:38:36.675772   54981 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.key
	I0802 18:38:36.675805   54981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt with IP's: []
	I0802 18:38:37.044297   54981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt ...
	I0802 18:38:37.044330   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt: {Name:mk5dc7d2de3249458e397031198ef23511a1d9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.044494   54981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.key ...
	I0802 18:38:37.044508   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.key: {Name:mkb85a8c84c756b197b473e7ba76dfdb4d9b92ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.044611   54981 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073
	I0802 18:38:37.044629   54981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt.64198073 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.104]
	I0802 18:38:37.152972   54981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt.64198073 ...
	I0802 18:38:37.153006   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt.64198073: {Name:mka1e7fe9d19c271a3106cd7c97889a4de25691e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.153165   54981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073 ...
	I0802 18:38:37.153177   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073: {Name:mke7b29d3989ff50b1b448b59673de4896c22b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.153253   54981 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt.64198073 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt
	I0802 18:38:37.153329   54981 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key
	I0802 18:38:37.153380   54981 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key
	I0802 18:38:37.153397   54981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt with IP's: []
	I0802 18:38:37.302690   54981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt ...
	I0802 18:38:37.302725   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt: {Name:mkb0727b5aee4e9820438678e5a28d60e1ce458c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.302896   54981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key ...
	I0802 18:38:37.302907   54981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key: {Name:mk9af6ac7479e428038bc11a9ab8692cf7fcb38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:38:37.303069   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:38:37.303125   54981 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:38:37.303139   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:38:37.303171   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:38:37.303197   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:38:37.303218   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:38:37.303259   54981 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:38:37.303829   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:38:37.331256   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:38:37.355643   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:38:37.380655   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:38:37.404392   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 18:38:37.430556   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:38:37.455859   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:38:37.487576   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:38:37.525749   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:38:37.550198   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:38:37.573882   54981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:38:37.598068   54981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:38:37.614475   54981 ssh_runner.go:195] Run: openssl version
	I0802 18:38:37.620390   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:38:37.631191   54981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:38:37.635832   54981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:38:37.635890   54981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:38:37.641938   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:38:37.653003   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:38:37.668099   54981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:38:37.673744   54981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:38:37.673817   54981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:38:37.681324   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:38:37.693524   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:38:37.704428   54981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:38:37.708882   54981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:38:37.708937   54981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:38:37.714499   54981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:38:37.726031   54981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:38:37.730912   54981 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 18:38:37.730966   54981 kubeadm.go:392] StartCluster: {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:38:37.731033   54981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:38:37.731078   54981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:38:37.769768   54981 cri.go:89] found id: ""
	I0802 18:38:37.769848   54981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:38:37.780226   54981 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:38:37.789805   54981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:38:37.799364   54981 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:38:37.799392   54981 kubeadm.go:157] found existing configuration files:
	
	I0802 18:38:37.799448   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:38:37.808416   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:38:37.808487   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:38:37.817763   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:38:37.826548   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:38:37.826600   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:38:37.835817   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:38:37.844825   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:38:37.844889   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:38:37.854758   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:38:37.863777   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:38:37.863833   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:38:37.875731   54981 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:38:38.122300   54981 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:40:35.480519   54981 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:40:35.480609   54981 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:40:35.482324   54981 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:40:35.482392   54981 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:40:35.482487   54981 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:40:35.482613   54981 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:40:35.482757   54981 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:40:35.482816   54981 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:40:35.484563   54981 out.go:204]   - Generating certificates and keys ...
	I0802 18:40:35.484644   54981 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:40:35.484722   54981 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:40:35.484812   54981 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 18:40:35.484892   54981 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 18:40:35.484973   54981 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 18:40:35.485040   54981 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 18:40:35.485115   54981 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 18:40:35.485306   54981 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	I0802 18:40:35.485386   54981 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 18:40:35.485487   54981 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	I0802 18:40:35.485569   54981 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 18:40:35.485659   54981 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 18:40:35.485716   54981 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 18:40:35.485804   54981 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:40:35.485888   54981 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:40:35.485958   54981 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:40:35.486036   54981 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:40:35.486120   54981 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:40:35.486270   54981 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:40:35.486389   54981 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:40:35.486451   54981 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:40:35.486544   54981 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:40:35.487990   54981 out.go:204]   - Booting up control plane ...
	I0802 18:40:35.488084   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:40:35.488164   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:40:35.488248   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:40:35.488352   54981 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:40:35.488507   54981 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:40:35.488585   54981 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:40:35.488695   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:40:35.488907   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:40:35.489013   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:40:35.489202   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:40:35.489273   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:40:35.489485   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:40:35.489552   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:40:35.489721   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:40:35.489799   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:40:35.489956   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:40:35.489964   54981 kubeadm.go:310] 
	I0802 18:40:35.490017   54981 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:40:35.490057   54981 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:40:35.490063   54981 kubeadm.go:310] 
	I0802 18:40:35.490089   54981 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:40:35.490123   54981 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:40:35.490207   54981 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:40:35.490214   54981 kubeadm.go:310] 
	I0802 18:40:35.490308   54981 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:40:35.490365   54981 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:40:35.490395   54981 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:40:35.490401   54981 kubeadm.go:310] 
	I0802 18:40:35.490557   54981 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:40:35.490677   54981 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:40:35.490685   54981 kubeadm.go:310] 
	I0802 18:40:35.490770   54981 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:40:35.490848   54981 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:40:35.490942   54981 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:40:35.491019   54981 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:40:35.491045   54981 kubeadm.go:310] 
	W0802 18:40:35.491221   54981 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-490984] and IPs [192.168.50.104 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0802 18:40:35.491278   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 18:40:35.947033   54981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:40:35.961410   54981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:40:35.971775   54981 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:40:35.971793   54981 kubeadm.go:157] found existing configuration files:
	
	I0802 18:40:35.971850   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:40:35.981158   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:40:35.981226   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:40:35.990032   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:40:35.999133   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:40:35.999199   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:40:36.008540   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:40:36.018714   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:40:36.018772   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:40:36.027614   54981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:40:36.035926   54981 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:40:36.035972   54981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:40:36.045428   54981 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:40:36.277791   54981 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:42:32.188356   54981 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:42:32.188504   54981 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:42:32.190562   54981 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:42:32.190633   54981 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:42:32.190725   54981 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:42:32.190850   54981 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:42:32.190973   54981 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:42:32.191063   54981 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:42:32.192798   54981 out.go:204]   - Generating certificates and keys ...
	I0802 18:42:32.192910   54981 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:42:32.193000   54981 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:42:32.193110   54981 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:42:32.193182   54981 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:42:32.193240   54981 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:42:32.193285   54981 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:42:32.193339   54981 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:42:32.193390   54981 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:42:32.193451   54981 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:42:32.193541   54981 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:42:32.193589   54981 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:42:32.193652   54981 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:42:32.193744   54981 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:42:32.193821   54981 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:42:32.193906   54981 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:42:32.193980   54981 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:42:32.194118   54981 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:42:32.194220   54981 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:42:32.194278   54981 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:42:32.194350   54981 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:42:32.195869   54981 out.go:204]   - Booting up control plane ...
	I0802 18:42:32.195966   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:42:32.196031   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:42:32.196086   54981 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:42:32.196166   54981 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:42:32.196348   54981 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:42:32.196403   54981 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:42:32.196483   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:42:32.196710   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:42:32.196774   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:42:32.196940   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:42:32.197036   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:42:32.197196   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:42:32.197288   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:42:32.197506   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:42:32.197582   54981 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:42:32.197748   54981 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:42:32.197756   54981 kubeadm.go:310] 
	I0802 18:42:32.197803   54981 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:42:32.197862   54981 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:42:32.197871   54981 kubeadm.go:310] 
	I0802 18:42:32.197920   54981 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:42:32.197970   54981 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:42:32.198105   54981 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:42:32.198113   54981 kubeadm.go:310] 
	I0802 18:42:32.198194   54981 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:42:32.198236   54981 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:42:32.198282   54981 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:42:32.198296   54981 kubeadm.go:310] 
	I0802 18:42:32.198422   54981 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:42:32.198519   54981 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:42:32.198530   54981 kubeadm.go:310] 
	I0802 18:42:32.198655   54981 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:42:32.198757   54981 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:42:32.198862   54981 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:42:32.198959   54981 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:42:32.198980   54981 kubeadm.go:310] 
	I0802 18:42:32.199054   54981 kubeadm.go:394] duration metric: took 3m54.468093451s to StartCluster
	I0802 18:42:32.199124   54981 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:42:32.199313   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:42:32.249802   54981 cri.go:89] found id: ""
	I0802 18:42:32.249828   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.249841   54981 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:42:32.249848   54981 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:42:32.249913   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:42:32.283815   54981 cri.go:89] found id: ""
	I0802 18:42:32.283839   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.283847   54981 logs.go:278] No container was found matching "etcd"
	I0802 18:42:32.283853   54981 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:42:32.283901   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:42:32.325006   54981 cri.go:89] found id: ""
	I0802 18:42:32.325037   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.325047   54981 logs.go:278] No container was found matching "coredns"
	I0802 18:42:32.325055   54981 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:42:32.325120   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:42:32.375445   54981 cri.go:89] found id: ""
	I0802 18:42:32.375477   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.375485   54981 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:42:32.375493   54981 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:42:32.375557   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:42:32.412814   54981 cri.go:89] found id: ""
	I0802 18:42:32.412841   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.412852   54981 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:42:32.412858   54981 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:42:32.412927   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:42:32.446257   54981 cri.go:89] found id: ""
	I0802 18:42:32.446291   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.446302   54981 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:42:32.446311   54981 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:42:32.446376   54981 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:42:32.479307   54981 cri.go:89] found id: ""
	I0802 18:42:32.479330   54981 logs.go:276] 0 containers: []
	W0802 18:42:32.479340   54981 logs.go:278] No container was found matching "kindnet"
	I0802 18:42:32.479352   54981 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:42:32.479368   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:42:32.588402   54981 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:42:32.588428   54981 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:42:32.588447   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:42:32.680153   54981 logs.go:123] Gathering logs for container status ...
	I0802 18:42:32.680192   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:42:32.714847   54981 logs.go:123] Gathering logs for kubelet ...
	I0802 18:42:32.714875   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:42:32.762589   54981 logs.go:123] Gathering logs for dmesg ...
	I0802 18:42:32.762627   54981 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0802 18:42:32.775549   54981 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:42:32.775601   54981 out.go:239] * 
	* 
	W0802 18:42:32.775658   54981 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:42:32.775680   54981 out.go:239] * 
	* 
	W0802 18:42:32.776610   54981 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:42:32.779698   54981 out.go:177] 
	W0802 18:42:32.780802   54981 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:42:32.780847   54981 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:42:32.780868   54981 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:42:32.782234   54981 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 6 (241.877183ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:42:33.053693   57799 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-490984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (283.08s)
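The FirstStart failure above is kubeadm's wait-control-plane timeout: the kubelet on old-k8s-version-490984 never answered http://localhost:10248/healthz, so no control-plane static pod came up and minikube exited with K8S_KUBELET_NOT_RUNNING (exit status 109). A minimal triage sketch follows; the profile name, the CRI-O socket path, and the --extra-config suggestion are taken directly from the log above, while wrapping the commands in minikube ssh is an assumption about how the node is reached:

	# check whether the kubelet is active and why it exited
	minikube ssh -p old-k8s-version-490984 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-490984 -- sudo journalctl -xeu kubelet
	# list any control-plane containers that crashed under CRI-O
	minikube ssh -p old-k8s-version-490984 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# if the kubelet logs point at a cgroup-driver mismatch, retry the start with the driver pinned, as the log itself suggests
	minikube start -p old-k8s-version-490984 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd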

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-407306 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-407306 --alsologtostderr -v=3: exit status 82 (2m0.531267742s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-407306"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:41:20.191134   57413 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:41:20.191244   57413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:41:20.191252   57413 out.go:304] Setting ErrFile to fd 2...
	I0802 18:41:20.191256   57413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:41:20.191417   57413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:41:20.191624   57413 out.go:298] Setting JSON to false
	I0802 18:41:20.191703   57413 mustload.go:65] Loading cluster: no-preload-407306
	I0802 18:41:20.192018   57413 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:41:20.192084   57413 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/config.json ...
	I0802 18:41:20.192269   57413 mustload.go:65] Loading cluster: no-preload-407306
	I0802 18:41:20.192368   57413 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:41:20.192391   57413 stop.go:39] StopHost: no-preload-407306
	I0802 18:41:20.192770   57413 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:41:20.192825   57413 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:41:20.207933   57413 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37637
	I0802 18:41:20.208347   57413 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:41:20.208919   57413 main.go:141] libmachine: Using API Version  1
	I0802 18:41:20.208938   57413 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:41:20.209259   57413 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:41:20.211606   57413 out.go:177] * Stopping node "no-preload-407306"  ...
	I0802 18:41:20.212865   57413 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 18:41:20.212887   57413 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:41:20.213116   57413 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 18:41:20.213158   57413 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:41:20.215960   57413 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:41:20.216314   57413 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:39:04 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:41:20.216342   57413 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:41:20.216477   57413 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:41:20.216663   57413 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:41:20.216812   57413 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:41:20.216955   57413 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa Username:docker}
	I0802 18:41:20.347662   57413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 18:41:20.406611   57413 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 18:41:20.480464   57413 main.go:141] libmachine: Stopping "no-preload-407306"...
	I0802 18:41:20.480491   57413 main.go:141] libmachine: (no-preload-407306) Calling .GetState
	I0802 18:41:20.482154   57413 main.go:141] libmachine: (no-preload-407306) Calling .Stop
	I0802 18:41:20.485538   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 0/120
	I0802 18:41:21.486918   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 1/120
	I0802 18:41:22.488602   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 2/120
	I0802 18:41:23.489975   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 3/120
	I0802 18:41:24.491415   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 4/120
	I0802 18:41:25.493566   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 5/120
	I0802 18:41:26.495005   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 6/120
	I0802 18:41:27.496478   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 7/120
	I0802 18:41:28.497905   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 8/120
	I0802 18:41:29.499580   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 9/120
	I0802 18:41:30.501632   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 10/120
	I0802 18:41:31.503646   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 11/120
	I0802 18:41:32.505456   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 12/120
	I0802 18:41:33.506871   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 13/120
	I0802 18:41:34.508352   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 14/120
	I0802 18:41:35.510328   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 15/120
	I0802 18:41:36.511847   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 16/120
	I0802 18:41:37.513708   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 17/120
	I0802 18:41:38.515195   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 18/120
	I0802 18:41:39.516466   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 19/120
	I0802 18:41:40.518698   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 20/120
	I0802 18:41:41.519993   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 21/120
	I0802 18:41:42.521629   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 22/120
	I0802 18:41:43.522982   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 23/120
	I0802 18:41:44.524441   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 24/120
	I0802 18:41:45.526437   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 25/120
	I0802 18:41:46.528342   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 26/120
	I0802 18:41:47.529693   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 27/120
	I0802 18:41:48.531064   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 28/120
	I0802 18:41:49.532460   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 29/120
	I0802 18:41:50.534630   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 30/120
	I0802 18:41:51.536197   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 31/120
	I0802 18:41:52.537871   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 32/120
	I0802 18:41:53.539326   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 33/120
	I0802 18:41:54.540569   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 34/120
	I0802 18:41:55.542482   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 35/120
	I0802 18:41:56.544001   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 36/120
	I0802 18:41:57.545396   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 37/120
	I0802 18:41:58.546778   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 38/120
	I0802 18:41:59.548242   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 39/120
	I0802 18:42:00.550215   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 40/120
	I0802 18:42:01.551660   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 41/120
	I0802 18:42:02.553526   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 42/120
	I0802 18:42:03.554907   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 43/120
	I0802 18:42:04.556262   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 44/120
	I0802 18:42:05.558115   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 45/120
	I0802 18:42:06.559531   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 46/120
	I0802 18:42:07.560932   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 47/120
	I0802 18:42:08.562328   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 48/120
	I0802 18:42:09.563884   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 49/120
	I0802 18:42:10.565927   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 50/120
	I0802 18:42:11.567191   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 51/120
	I0802 18:42:12.568396   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 52/120
	I0802 18:42:13.569661   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 53/120
	I0802 18:42:14.571012   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 54/120
	I0802 18:42:15.572398   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 55/120
	I0802 18:42:16.573774   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 56/120
	I0802 18:42:17.575376   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 57/120
	I0802 18:42:18.577643   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 58/120
	I0802 18:42:19.578886   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 59/120
	I0802 18:42:20.580934   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 60/120
	I0802 18:42:21.582299   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 61/120
	I0802 18:42:22.583642   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 62/120
	I0802 18:42:23.584998   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 63/120
	I0802 18:42:24.586372   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 64/120
	I0802 18:42:25.588305   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 65/120
	I0802 18:42:26.589581   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 66/120
	I0802 18:42:27.590828   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 67/120
	I0802 18:42:28.592392   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 68/120
	I0802 18:42:29.593551   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 69/120
	I0802 18:42:30.595553   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 70/120
	I0802 18:42:31.597160   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 71/120
	I0802 18:42:32.599078   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 72/120
	I0802 18:42:33.600162   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 73/120
	I0802 18:42:34.601498   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 74/120
	I0802 18:42:35.602990   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 75/120
	I0802 18:42:36.604435   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 76/120
	I0802 18:42:37.605872   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 77/120
	I0802 18:42:38.608148   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 78/120
	I0802 18:42:39.609861   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 79/120
	I0802 18:42:40.611973   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 80/120
	I0802 18:42:41.613431   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 81/120
	I0802 18:42:42.614855   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 82/120
	I0802 18:42:43.616073   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 83/120
	I0802 18:42:44.617431   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 84/120
	I0802 18:42:45.619158   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 85/120
	I0802 18:42:46.620894   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 86/120
	I0802 18:42:47.622266   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 87/120
	I0802 18:42:48.623600   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 88/120
	I0802 18:42:49.625495   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 89/120
	I0802 18:42:50.627617   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 90/120
	I0802 18:42:51.628972   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 91/120
	I0802 18:42:52.630244   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 92/120
	I0802 18:42:53.631761   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 93/120
	I0802 18:42:54.632952   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 94/120
	I0802 18:42:55.634317   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 95/120
	I0802 18:42:56.636763   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 96/120
	I0802 18:42:57.638030   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 97/120
	I0802 18:42:58.639349   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 98/120
	I0802 18:42:59.640767   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 99/120
	I0802 18:43:00.642757   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 100/120
	I0802 18:43:01.644059   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 101/120
	I0802 18:43:02.645313   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 102/120
	I0802 18:43:03.646649   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 103/120
	I0802 18:43:04.647806   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 104/120
	I0802 18:43:05.649255   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 105/120
	I0802 18:43:06.650637   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 106/120
	I0802 18:43:07.652744   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 107/120
	I0802 18:43:08.654098   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 108/120
	I0802 18:43:09.655776   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 109/120
	I0802 18:43:10.657885   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 110/120
	I0802 18:43:11.659389   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 111/120
	I0802 18:43:12.660777   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 112/120
	I0802 18:43:13.662193   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 113/120
	I0802 18:43:14.663928   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 114/120
	I0802 18:43:15.665593   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 115/120
	I0802 18:43:16.666885   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 116/120
	I0802 18:43:17.668306   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 117/120
	I0802 18:43:18.669575   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 118/120
	I0802 18:43:19.670964   57413 main.go:141] libmachine: (no-preload-407306) Waiting for machine to stop 119/120
	I0802 18:43:20.672370   57413 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 18:43:20.672424   57413 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0802 18:43:20.674213   57413 out.go:177] 
	W0802 18:43:20.675741   57413 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0802 18:43:20.675755   57413 out.go:239] * 
	* 
	W0802 18:43:20.678268   57413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:43:20.680611   57413 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-407306 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 3 (18.608477924s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:43:39.291425   58101 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host
	E0802 18:43:39.291442   58101 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-407306" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.14s)
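The Stop failure above is a guest shutdown timeout: libmachine asked libvirt to stop the no-preload-407306 domain, polled "Waiting for machine to stop" 120 times over two minutes, and the VM never left the "Running" state, so minikube exited 82 (GUEST_STOP_TIMEOUT) and the follow-up status check could no longer reach the host over SSH (no route to host). A minimal sketch of how one might confirm and clear the stuck guest by hand, assuming virsh is available on the host and the libvirt domain carries the profile name on the kvm2 driver's default qemu:///system connection (the domain name comes from the log above; the rest is an assumption about the environment):

	# is the domain really still running according to libvirt?
	virsh -c qemu:///system list --all
	# force power-off of the stuck guest (equivalent to pulling the plug)
	virsh -c qemu:///system destroy no-preload-407306
	# then let minikube reconcile the profile state
	minikube stop -p no-preload-407306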

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-504903 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-504903 --alsologtostderr -v=3: exit status 82 (2m0.483131349s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-504903"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:42:17.871216   57731 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:42:17.871472   57731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:42:17.871481   57731 out.go:304] Setting ErrFile to fd 2...
	I0802 18:42:17.871485   57731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:42:17.871712   57731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:42:17.871928   57731 out.go:298] Setting JSON to false
	I0802 18:42:17.871999   57731 mustload.go:65] Loading cluster: default-k8s-diff-port-504903
	I0802 18:42:17.872292   57731 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:42:17.872351   57731 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/config.json ...
	I0802 18:42:17.872517   57731 mustload.go:65] Loading cluster: default-k8s-diff-port-504903
	I0802 18:42:17.872662   57731 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:42:17.872707   57731 stop.go:39] StopHost: default-k8s-diff-port-504903
	I0802 18:42:17.873080   57731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:42:17.873124   57731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:42:17.888056   57731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38699
	I0802 18:42:17.888525   57731 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:42:17.889093   57731 main.go:141] libmachine: Using API Version  1
	I0802 18:42:17.889116   57731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:42:17.889479   57731 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:42:17.891751   57731 out.go:177] * Stopping node "default-k8s-diff-port-504903"  ...
	I0802 18:42:17.892876   57731 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 18:42:17.892903   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:42:17.893148   57731 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 18:42:17.893181   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:42:17.896128   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:42:17.896517   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:40:46 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:42:17.896546   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:42:17.896639   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:42:17.896806   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:42:17.896950   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:42:17.897080   57731 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:42:17.995617   57731 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 18:42:18.053615   57731 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 18:42:18.112289   57731 main.go:141] libmachine: Stopping "default-k8s-diff-port-504903"...
	I0802 18:42:18.112340   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:42:18.114258   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Stop
	I0802 18:42:18.118254   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 0/120
	I0802 18:42:19.119710   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 1/120
	I0802 18:42:20.121104   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 2/120
	I0802 18:42:21.122495   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 3/120
	I0802 18:42:22.123883   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 4/120
	I0802 18:42:23.125240   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 5/120
	I0802 18:42:24.126582   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 6/120
	I0802 18:42:25.128107   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 7/120
	I0802 18:42:26.129451   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 8/120
	I0802 18:42:27.131118   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 9/120
	I0802 18:42:28.133184   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 10/120
	I0802 18:42:29.134395   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 11/120
	I0802 18:42:30.135923   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 12/120
	I0802 18:42:31.137741   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 13/120
	I0802 18:42:32.138759   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 14/120
	I0802 18:42:33.140419   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 15/120
	I0802 18:42:34.141934   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 16/120
	I0802 18:42:35.143070   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 17/120
	I0802 18:42:36.144535   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 18/120
	I0802 18:42:37.145740   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 19/120
	I0802 18:42:38.147557   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 20/120
	I0802 18:42:39.149539   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 21/120
	I0802 18:42:40.150800   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 22/120
	I0802 18:42:41.152244   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 23/120
	I0802 18:42:42.153617   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 24/120
	I0802 18:42:43.155344   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 25/120
	I0802 18:42:44.156858   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 26/120
	I0802 18:42:45.158188   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 27/120
	I0802 18:42:46.159608   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 28/120
	I0802 18:42:47.160778   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 29/120
	I0802 18:42:48.162734   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 30/120
	I0802 18:42:49.164056   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 31/120
	I0802 18:42:50.165300   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 32/120
	I0802 18:42:51.166673   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 33/120
	I0802 18:42:52.167849   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 34/120
	I0802 18:42:53.169407   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 35/120
	I0802 18:42:54.170745   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 36/120
	I0802 18:42:55.172157   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 37/120
	I0802 18:42:56.173532   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 38/120
	I0802 18:42:57.174963   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 39/120
	I0802 18:42:58.176933   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 40/120
	I0802 18:42:59.178226   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 41/120
	I0802 18:43:00.179870   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 42/120
	I0802 18:43:01.181959   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 43/120
	I0802 18:43:02.183293   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 44/120
	I0802 18:43:03.185153   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 45/120
	I0802 18:43:04.186606   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 46/120
	I0802 18:43:05.188015   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 47/120
	I0802 18:43:06.189353   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 48/120
	I0802 18:43:07.190938   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 49/120
	I0802 18:43:08.192258   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 50/120
	I0802 18:43:09.193516   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 51/120
	I0802 18:43:10.194809   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 52/120
	I0802 18:43:11.196137   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 53/120
	I0802 18:43:12.197797   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 54/120
	I0802 18:43:13.199518   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 55/120
	I0802 18:43:14.200885   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 56/120
	I0802 18:43:15.202212   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 57/120
	I0802 18:43:16.203575   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 58/120
	I0802 18:43:17.204929   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 59/120
	I0802 18:43:18.207022   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 60/120
	I0802 18:43:19.208675   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 61/120
	I0802 18:43:20.210086   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 62/120
	I0802 18:43:21.211508   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 63/120
	I0802 18:43:22.212805   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 64/120
	I0802 18:43:23.214832   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 65/120
	I0802 18:43:24.216347   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 66/120
	I0802 18:43:25.217822   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 67/120
	I0802 18:43:26.219186   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 68/120
	I0802 18:43:27.220610   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 69/120
	I0802 18:43:28.222978   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 70/120
	I0802 18:43:29.224367   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 71/120
	I0802 18:43:30.225620   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 72/120
	I0802 18:43:31.226987   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 73/120
	I0802 18:43:32.228212   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 74/120
	I0802 18:43:33.230036   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 75/120
	I0802 18:43:34.232031   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 76/120
	I0802 18:43:35.233439   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 77/120
	I0802 18:43:36.234768   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 78/120
	I0802 18:43:37.236282   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 79/120
	I0802 18:43:38.237566   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 80/120
	I0802 18:43:39.239477   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 81/120
	I0802 18:43:40.240691   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 82/120
	I0802 18:43:41.242178   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 83/120
	I0802 18:43:42.243423   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 84/120
	I0802 18:43:43.245542   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 85/120
	I0802 18:43:44.246902   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 86/120
	I0802 18:43:45.248194   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 87/120
	I0802 18:43:46.249835   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 88/120
	I0802 18:43:47.251261   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 89/120
	I0802 18:43:48.253690   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 90/120
	I0802 18:43:49.254928   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 91/120
	I0802 18:43:50.256598   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 92/120
	I0802 18:43:51.257972   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 93/120
	I0802 18:43:52.259439   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 94/120
	I0802 18:43:53.261361   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 95/120
	I0802 18:43:54.262680   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 96/120
	I0802 18:43:55.264408   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 97/120
	I0802 18:43:56.265871   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 98/120
	I0802 18:43:57.267093   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 99/120
	I0802 18:43:58.269105   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 100/120
	I0802 18:43:59.270433   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 101/120
	I0802 18:44:00.271803   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 102/120
	I0802 18:44:01.273505   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 103/120
	I0802 18:44:02.274891   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 104/120
	I0802 18:44:03.276920   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 105/120
	I0802 18:44:04.278366   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 106/120
	I0802 18:44:05.279594   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 107/120
	I0802 18:44:06.281845   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 108/120
	I0802 18:44:07.283148   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 109/120
	I0802 18:44:08.285272   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 110/120
	I0802 18:44:09.286579   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 111/120
	I0802 18:44:10.287968   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 112/120
	I0802 18:44:11.289918   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 113/120
	I0802 18:44:12.291404   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 114/120
	I0802 18:44:13.293628   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 115/120
	I0802 18:44:14.295170   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 116/120
	I0802 18:44:15.296838   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 117/120
	I0802 18:44:16.298421   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 118/120
	I0802 18:44:17.299826   57731 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for machine to stop 119/120
	I0802 18:44:18.300856   57731 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 18:44:18.300910   57731 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0802 18:44:18.302756   57731 out.go:177] 
	W0802 18:44:18.304225   57731 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0802 18:44:18.304239   57731 out.go:239] * 
	* 
	W0802 18:44:18.306744   57731 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:44:18.308074   57731 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-504903 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903: exit status 3 (18.579232551s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:44:36.891480   58639 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host
	E0802 18:44:36.891505   58639 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-504903" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.06s)
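Note on the failure mode above: the stop is a timeout. The log shows a bounded poll, one attempt per second for 120 attempts ("Waiting for machine to stop N/120"), after which the VM is still reported as Running and minikube exits with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below illustrates only that polling pattern; waitForStop and isStopped are hypothetical names, not minikube's API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls a stop condition once per second for a fixed number of
	// attempts, like the "Waiting for machine to stop N/120" loop in the log.
	func waitForStop(isStopped func() bool, attempts int) error {
		for i := 0; i < attempts; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Stand-in check; a real implementation would query the hypervisor.
		isStopped := func() bool { return false }
		if err := waitForStop(isStopped, 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}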

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-490984 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-490984 create -f testdata/busybox.yaml: exit status 1 (42.631258ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-490984" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-490984 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 6 (204.594ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:42:33.308895   57840 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-490984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 6 (209.096141ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:42:33.519717   57870 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-490984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
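Note on the failure mode above: this is a kubeconfig problem rather than a cluster problem. The context "old-k8s-version-490984" is missing from /home/jenkins/minikube-integration/19355-5397/kubeconfig, so every kubectl call against it exits with status 1, and the status output suggests `minikube update-context` as the fix. The sketch below, assuming k8s.io/client-go is available, only illustrates the condition behind the "does not appear in ... kubeconfig" error.

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path and context name taken from the log output above.
		path := "/home/jenkins/minikube-integration/19355-5397/kubeconfig"
		name := "old-k8s-version-490984"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "failed to load kubeconfig:", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// This is the condition behind the "does not appear in ... kubeconfig"
			// error; `minikube update-context` repairs the stale entry.
			fmt.Printf("context %q not found in %s\n", name, path)
			return
		}
		fmt.Printf("context %q present\n", name)
	}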

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-490984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0802 18:42:43.927863   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-490984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.094974234s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-490984 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-490984 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-490984 describe deploy/metrics-server -n kube-system: exit status 1 (43.410908ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-490984" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-490984 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 6 (206.109234ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:44:09.864807   58455 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-490984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.35s)
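Note on the failure mode above: the addon enable fails because the in-VM kubectl cannot reach the API server; every apply gets "The connection to the server localhost:8443 was refused". The Go sketch below probes the /healthz endpoint to show the same symptom from the node's point of view; the URL and the insecure TLS setting are assumptions for a quick illustration, not how the addon manager itself works.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint the addon callbacks hit via the in-VM kubectl (see log above).
		url := "https://localhost:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Reachability probe only; skip certificate verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			// Matches the failure above when the apiserver is down:
			// "The connection to the server localhost:8443 was refused".
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /healthz status:", resp.Status)
	}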

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 3 (3.167796528s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:43:42.459468   58197 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host
	E0802 18:43:42.459486   58197 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-407306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-407306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155234966s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-407306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 3 (3.060277735s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:43:51.675482   58277 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host
	E0802 18:43:51.675499   58277 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.168:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-407306" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (361.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-407306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-407306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: exit status 80 (5m59.823721884s)

                                                
                                                
-- stdout --
	* [no-preload-407306] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "no-preload-407306" primary control-plane node in "no-preload-407306" cluster
	* Updating the running kvm2 "no-preload-407306" VM ...
	* Restarting existing kvm2 VM for "no-preload-407306" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:43:51.714250   58307 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:43:51.714482   58307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:43:51.714490   58307 out.go:304] Setting ErrFile to fd 2...
	I0802 18:43:51.714495   58307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:43:51.714682   58307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:43:51.715213   58307 out.go:298] Setting JSON to false
	I0802 18:43:51.716125   58307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5176,"bootTime":1722619056,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:43:51.716177   58307 start.go:139] virtualization: kvm guest
	I0802 18:43:51.718490   58307 out.go:177] * [no-preload-407306] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:43:51.719834   58307 notify.go:220] Checking for updates...
	I0802 18:43:51.719857   58307 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:43:51.721207   58307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:43:51.722566   58307 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:43:51.723879   58307 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:43:51.725144   58307 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:43:51.726382   58307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:43:51.727958   58307 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:43:51.728384   58307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:43:51.728457   58307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:43:51.743084   58307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0802 18:43:51.743444   58307 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:43:51.743999   58307 main.go:141] libmachine: Using API Version  1
	I0802 18:43:51.744041   58307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:43:51.744351   58307 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:43:51.744555   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:43:51.744813   58307 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:43:51.745092   58307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:43:51.745122   58307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:43:51.759416   58307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43325
	I0802 18:43:51.759773   58307 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:43:51.760154   58307 main.go:141] libmachine: Using API Version  1
	I0802 18:43:51.760172   58307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:43:51.760462   58307 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:43:51.760594   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:43:51.795673   58307 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:43:51.796858   58307 start.go:297] selected driver: kvm2
	I0802 18:43:51.796874   58307 start.go:901] validating driver "kvm2" against &{Name:no-preload-407306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-407306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:43:51.796965   58307 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:43:51.797907   58307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:51.798021   58307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:43:51.812746   58307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:43:51.813138   58307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:43:51.813171   58307 cni.go:84] Creating CNI manager for ""
	I0802 18:43:51.813186   58307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:43:51.813227   58307 start.go:340] cluster config:
	{Name:no-preload-407306 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-407306 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:43:51.813360   58307 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:51.815329   58307 out.go:177] * Starting "no-preload-407306" primary control-plane node in "no-preload-407306" cluster
	I0802 18:43:51.816596   58307 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 18:43:51.816733   58307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/config.json ...
	I0802 18:43:51.816866   58307 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:43:51.816936   58307 start.go:360] acquireMachinesLock for no-preload-407306: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:43:51.816982   58307 start.go:364] duration metric: took 26.785µs to acquireMachinesLock for "no-preload-407306"
	I0802 18:43:51.816998   58307 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:43:51.817012   58307 fix.go:54] fixHost starting: 
	I0802 18:43:51.817253   58307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:43:51.817283   58307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:43:51.832799   58307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0802 18:43:51.833203   58307 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:43:51.833727   58307 main.go:141] libmachine: Using API Version  1
	I0802 18:43:51.833758   58307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:43:51.834088   58307 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:43:51.834274   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:43:51.834448   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetState
	I0802 18:43:51.836130   58307 fix.go:112] recreateIfNeeded on no-preload-407306: state=Running err=<nil>
	W0802 18:43:51.836149   58307 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:43:51.838008   58307 out.go:177] * Updating the running kvm2 "no-preload-407306" VM ...
	I0802 18:43:51.839391   58307 machine.go:94] provisionDockerMachine start ...
	I0802 18:43:51.839420   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:43:51.839602   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:43:51.842331   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:43:51.842754   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:39:04 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:43:51.842787   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:43:51.842898   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:43:51.843055   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:43:51.843224   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:43:51.843339   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:43:51.843478   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:43:51.843661   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:43:51.843676   58307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:43:52.091695   58307 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:43:52.341896   58307 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:43:52.621311   58307 cache.go:107] acquiring lock: {Name:mk533ca5347055d768e21206f959fb399fb41416 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621331   58307 cache.go:107] acquiring lock: {Name:mkeb004a80bcae6474864d4658308f1f5288cc33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621355   58307 cache.go:107] acquiring lock: {Name:mkfe9659516d5b5d57e42d89e311bad1e14b2ad9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621394   58307 cache.go:107] acquiring lock: {Name:mk58abe157c9b4548aa5d1a02e4fb8153bf49f0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621405   58307 cache.go:107] acquiring lock: {Name:mk21696bcce9d9da949428af1fb41e87ff54ec7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621434   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0802 18:43:52.621434   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0802 18:43:52.621446   58307 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 134.901µs
	I0802 18:43:52.621463   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0802 18:43:52.621463   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0802 18:43:52.621473   58307 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 82.144µs
	I0802 18:43:52.621474   58307 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 75.424µs
	I0802 18:43:52.621484   58307 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0802 18:43:52.621486   58307 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0802 18:43:52.621463   58307 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0802 18:43:52.621446   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0802 18:43:52.621500   58307 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 203.5µs
	I0802 18:43:52.621507   58307 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0802 18:43:52.621476   58307 cache.go:107] acquiring lock: {Name:mk5e34aa8f31faf05136b9f0b42928fa13aa8201 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621527   58307 cache.go:107] acquiring lock: {Name:mke7b719f1118a3cea620c2855b2474df2323d11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621513   58307 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 119.048µs
	I0802 18:43:52.621561   58307 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0802 18:43:52.621540   58307 cache.go:107] acquiring lock: {Name:mk2f2694d2fc415350f8767b4c4d9d5eba615590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:43:52.621664   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0802 18:43:52.621668   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0802 18:43:52.621669   58307 cache.go:115] /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0802 18:43:52.621685   58307 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 207.931µs
	I0802 18:43:52.621679   58307 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 238.92µs
	I0802 18:43:52.621695   58307 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0802 18:43:52.621700   58307 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0802 18:43:52.621699   58307 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 210.659µs
	I0802 18:43:52.621708   58307 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0802 18:43:52.621720   58307 cache.go:87] Successfully saved all images to host disk.
	I0802 18:43:54.747568   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:43:57.819470   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:03.899378   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:06.971349   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:13.051363   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:16.123375   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:25.243348   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:28.315408   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:34.395387   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:37.467376   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:43.547322   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:46.619351   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:52.699370   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:44:55.771425   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:01.855329   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:04.927292   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:11.003347   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:14.075413   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:20.155366   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:23.227423   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:29.307378   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:32.379322   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:38.459366   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:41.531470   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:47.611379   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:50.683436   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:56.763459   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:59.835444   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:05.915421   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:08.987492   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:15.067402   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:18.139468   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:24.219465   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:27.291400   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:33.371373   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:36.443397   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:42.523396   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:45.595471   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:51.675409   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:54.747333   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:00.827402   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:03.899433   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:09.979389   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:13.051444   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:19.131393   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:22.203503   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:28.283424   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:31.355401   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:37.435418   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:40.507369   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:46.587351   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:49.659395   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:55.739425   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:58.811395   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:04.891407   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:07.963451   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:14.043353   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:17.115375   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:23.195400   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:26.267448   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:29.269466   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:48:29.269499   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:48:29.269822   58307 buildroot.go:166] provisioning hostname "no-preload-407306"
	I0802 18:48:29.269845   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:48:29.270039   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:48:29.271776   58307 machine.go:97] duration metric: took 4m37.432365616s to provisionDockerMachine
	I0802 18:48:29.271818   58307 fix.go:56] duration metric: took 4m37.454813328s for fixHost
	I0802 18:48:29.271824   58307 start.go:83] releasing machines lock for "no-preload-407306", held for 4m37.454832945s
	W0802 18:48:29.271843   58307 start.go:714] error starting host: provision: host is not running
	W0802 18:48:29.271939   58307 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	! StartHost failed, but will try again: provision: host is not running
	I0802 18:48:29.271947   58307 start.go:729] Will try again in 5 seconds ...
	I0802 18:48:34.274433   58307 start.go:360] acquireMachinesLock for no-preload-407306: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:49:31.740101   58307 start.go:364] duration metric: took 57.465618083s to acquireMachinesLock for "no-preload-407306"
	I0802 18:49:31.740149   58307 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:49:31.740160   58307 fix.go:54] fixHost starting: 
	I0802 18:49:31.740575   58307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:31.740609   58307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:31.759608   58307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0802 18:49:31.760082   58307 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:31.760653   58307 main.go:141] libmachine: Using API Version  1
	I0802 18:49:31.760675   58307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:31.761037   58307 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:31.761211   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:49:31.761366   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetState
	I0802 18:49:31.763383   58307 fix.go:112] recreateIfNeeded on no-preload-407306: state=Stopped err=<nil>
	I0802 18:49:31.763413   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	W0802 18:49:31.763564   58307 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:49:31.765158   58307 out.go:177] * Restarting existing kvm2 VM for "no-preload-407306" ...
	I0802 18:49:31.766602   58307 main.go:141] libmachine: (no-preload-407306) Calling .Start
	I0802 18:49:31.770847   58307 main.go:141] libmachine: (no-preload-407306) Ensuring networks are active...
	I0802 18:49:31.771779   58307 main.go:141] libmachine: (no-preload-407306) Ensuring network default is active
	I0802 18:49:31.772235   58307 main.go:141] libmachine: (no-preload-407306) Ensuring network mk-no-preload-407306 is active
	I0802 18:49:31.772749   58307 main.go:141] libmachine: (no-preload-407306) Getting domain xml...
	I0802 18:49:31.773652   58307 main.go:141] libmachine: (no-preload-407306) Creating domain...
	I0802 18:49:33.116487   58307 main.go:141] libmachine: (no-preload-407306) Waiting to get IP...
	I0802 18:49:33.117278   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.117861   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.117917   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.117824   60523 retry.go:31] will retry after 299.393277ms: waiting for machine to come up
	I0802 18:49:33.419381   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.419900   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.419939   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.419867   60523 retry.go:31] will retry after 336.579779ms: waiting for machine to come up
	I0802 18:49:33.758538   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.758983   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.759017   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.758930   60523 retry.go:31] will retry after 381.841162ms: waiting for machine to come up
	I0802 18:49:34.142424   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:34.142991   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:34.143024   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:34.142948   60523 retry.go:31] will retry after 595.515127ms: waiting for machine to come up
	I0802 18:49:34.739739   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:34.740253   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:34.740285   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:34.740179   60523 retry.go:31] will retry after 645.87755ms: waiting for machine to come up
	I0802 18:49:35.388031   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:35.388494   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:35.388522   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:35.388460   60523 retry.go:31] will retry after 779.258683ms: waiting for machine to come up
	I0802 18:49:36.169313   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:36.169980   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:36.170008   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:36.169938   60523 retry.go:31] will retry after 786.851499ms: waiting for machine to come up
	I0802 18:49:36.958309   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:36.958802   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:36.958826   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:36.958763   60523 retry.go:31] will retry after 1.182844308s: waiting for machine to come up
	I0802 18:49:38.143070   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:38.143657   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:38.143691   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:38.143591   60523 retry.go:31] will retry after 1.210856616s: waiting for machine to come up
	I0802 18:49:39.356008   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:39.356449   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:39.356478   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:39.356411   60523 retry.go:31] will retry after 2.076557718s: waiting for machine to come up
	I0802 18:49:41.435125   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:41.435669   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:41.435701   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:41.435606   60523 retry.go:31] will retry after 2.608166994s: waiting for machine to come up
	I0802 18:49:44.045442   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:44.045840   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:44.045867   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:44.045792   60523 retry.go:31] will retry after 2.597008412s: waiting for machine to come up
	I0802 18:49:46.644314   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:46.644702   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:46.644727   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:46.644661   60523 retry.go:31] will retry after 3.905375169s: waiting for machine to come up
	I0802 18:49:50.552843   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.553201   58307 main.go:141] libmachine: (no-preload-407306) Found IP for machine: 192.168.39.168
	I0802 18:49:50.553221   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has current primary IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.553246   58307 main.go:141] libmachine: (no-preload-407306) Reserving static IP address...
	I0802 18:49:50.553676   58307 main.go:141] libmachine: (no-preload-407306) Reserved static IP address: 192.168.39.168
	I0802 18:49:50.553697   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "no-preload-407306", mac: "52:54:00:bd:56:69", ip: "192.168.39.168"} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.553704   58307 main.go:141] libmachine: (no-preload-407306) Waiting for SSH to be available...
	I0802 18:49:50.553723   58307 main.go:141] libmachine: (no-preload-407306) DBG | skip adding static IP to network mk-no-preload-407306 - found existing host DHCP lease matching {name: "no-preload-407306", mac: "52:54:00:bd:56:69", ip: "192.168.39.168"}
	I0802 18:49:50.553733   58307 main.go:141] libmachine: (no-preload-407306) DBG | Getting to WaitForSSH function...
	I0802 18:49:50.555684   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.556042   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.556070   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.556192   58307 main.go:141] libmachine: (no-preload-407306) DBG | Using SSH client type: external
	I0802 18:49:50.556215   58307 main.go:141] libmachine: (no-preload-407306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa (-rw-------)
	I0802 18:49:50.556245   58307 main.go:141] libmachine: (no-preload-407306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:49:50.556264   58307 main.go:141] libmachine: (no-preload-407306) DBG | About to run SSH command:
	I0802 18:49:50.556280   58307 main.go:141] libmachine: (no-preload-407306) DBG | exit 0
	I0802 18:49:50.679205   58307 main.go:141] libmachine: (no-preload-407306) DBG | SSH cmd err, output: <nil>: 
	I0802 18:49:50.679609   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetConfigRaw
	I0802 18:49:50.680249   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetIP
	I0802 18:49:50.683007   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.683366   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.683396   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.683575   58307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/config.json ...
	I0802 18:49:50.683850   58307 machine.go:94] provisionDockerMachine start ...
	I0802 18:49:50.683881   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:49:50.684087   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.686447   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.686816   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.686842   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.686981   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.687186   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.687393   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.687560   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.687758   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.687913   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.687923   58307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:49:50.791371   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:49:50.791395   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:50.791626   58307 buildroot.go:166] provisioning hostname "no-preload-407306"
	I0802 18:49:50.791647   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:50.791861   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.794279   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.794606   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.794646   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.794774   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.794952   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.795070   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.795234   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.795399   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.795615   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.795634   58307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407306 && echo "no-preload-407306" | sudo tee /etc/hostname
	I0802 18:49:50.911657   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407306
	
	I0802 18:49:50.911690   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.915360   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.915752   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.915776   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.915982   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.916222   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.916422   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.916590   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.916826   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.917040   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.917067   58307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407306/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:49:51.027987   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:49:51.028024   58307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:49:51.028057   58307 buildroot.go:174] setting up certificates
	I0802 18:49:51.028075   58307 provision.go:84] configureAuth start
	I0802 18:49:51.028089   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:51.028375   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetIP
	I0802 18:49:51.031265   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.031756   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.031794   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.031922   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.034918   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.035346   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.035372   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.035476   58307 provision.go:143] copyHostCerts
	I0802 18:49:51.035545   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:49:51.035559   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:49:51.035627   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:49:51.035764   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:49:51.035775   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:49:51.035812   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:49:51.035902   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:49:51.035913   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:49:51.035942   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:49:51.036022   58307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.no-preload-407306 san=[127.0.0.1 192.168.39.168 localhost minikube no-preload-407306]
	I0802 18:49:51.168560   58307 provision.go:177] copyRemoteCerts
	I0802 18:49:51.168618   58307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:49:51.168644   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.171295   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.171647   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.171677   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.171833   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:51.172034   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.172211   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:51.172360   58307 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa Username:docker}
	I0802 18:49:51.258043   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:49:51.280372   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0802 18:49:51.304010   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:49:51.327808   58307 provision.go:87] duration metric: took 299.720899ms to configureAuth
	I0802 18:49:51.327839   58307 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:49:51.328049   58307 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:49:51.328146   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.330357   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.330649   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.330674   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.330856   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:51.331077   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.331298   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.331481   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:51.331682   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:51.331870   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:51.331890   58307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:49:51.490654   58307 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0802 18:49:51.490694   58307 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0802 18:49:51.490702   58307 machine.go:97] duration metric: took 806.832499ms to provisionDockerMachine
	I0802 18:49:51.490726   58307 fix.go:56] duration metric: took 19.750567005s for fixHost
	I0802 18:49:51.490731   58307 start.go:83] releasing machines lock for "no-preload-407306", held for 19.750607176s
	W0802 18:49:51.490806   58307 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p no-preload-407306" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p no-preload-407306" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0802 18:49:51.493574   58307 out.go:177] 
	W0802 18:49:51.494913   58307 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0802 18:49:51.494934   58307 out.go:239] * 
	* 
	W0802 18:49:51.495792   58307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:49:51.497754   58307 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-407306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (212.147309ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | cert-options-643429 ssh                                | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-643429 -- sudo                         | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-643429                                 | cert-options-643429          | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:38 UTC |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:38 UTC | 02 Aug 24 18:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:39 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:39 UTC | 02 Aug 24 18:45 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p cert-expiration-139745                              | cert-expiration-139745       | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-139745                              | cert-expiration-139745       | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:40 UTC |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:40 UTC | 02 Aug 24 18:42 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407306             | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:41 UTC | 02 Aug 24 18:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-504903  | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC | 02 Aug 24 18:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
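	For reference, the final entry in the table above corresponds to a single command line along these lines (flags copied verbatim from the table rows; invoking it through a top-level `minikube` binary is an assumption, since the CI job runs its own build of the binary, e.g. out/minikube-linux-amd64):
	
	    minikube start -p newest-cni-198962 --memory=2200 --alsologtostderr \
	      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-rc.0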
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:45:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:45:05.901232   59196 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:45:05.901490   59196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:45:05.901499   59196 out.go:304] Setting ErrFile to fd 2...
	I0802 18:45:05.901504   59196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:45:05.901674   59196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:45:05.902260   59196 out.go:298] Setting JSON to false
	I0802 18:45:05.903220   59196 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5250,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:45:05.903291   59196 start.go:139] virtualization: kvm guest
	I0802 18:45:05.905411   59196 out.go:177] * [newest-cni-198962] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:45:05.906831   59196 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:45:05.906853   59196 notify.go:220] Checking for updates...
	I0802 18:45:05.909258   59196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:45:05.910562   59196 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:45:05.911781   59196 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:45:05.913007   59196 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:45:05.914316   59196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:45:05.916009   59196 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:45:05.916112   59196 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:45:05.916200   59196 config.go:182] Loaded profile config "old-k8s-version-490984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:45:05.916276   59196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:45:05.951915   59196 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:45:05.953087   59196 start.go:297] selected driver: kvm2
	I0802 18:45:05.953098   59196 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:45:05.953108   59196 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:45:05.953815   59196 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:45:05.953892   59196 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:45:05.969243   59196 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:45:05.969288   59196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0802 18:45:05.969320   59196 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0802 18:45:05.969578   59196 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0802 18:45:05.969638   59196 cni.go:84] Creating CNI manager for ""
	I0802 18:45:05.969652   59196 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:45:05.969660   59196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 18:45:05.969718   59196 start.go:340] cluster config:
	{Name:newest-cni-198962 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-198962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:45:05.969811   59196 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:45:05.971856   59196 out.go:177] * Starting "newest-cni-198962" primary control-plane node in "newest-cni-198962" cluster
	I0802 18:45:01.855329   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:04.927292   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:05.973089   59196 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 18:45:05.973130   59196 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0802 18:45:05.973142   59196 cache.go:56] Caching tarball of preloaded images
	I0802 18:45:05.973219   59196 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:45:05.973231   59196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0802 18:45:05.973333   59196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/config.json ...
	I0802 18:45:05.973356   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/config.json: {Name:mkdbce632a67527ed1284c8549701c2eaf5dd0f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:45:05.973513   59196 start.go:360] acquireMachinesLock for newest-cni-198962: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:45:11.003347   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:14.075413   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:20.155366   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:23.227423   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:29.307378   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:32.379322   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:38.459366   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:41.531470   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:47.611379   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:50.683436   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:56.763459   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:45:59.835444   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:05.915421   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:08.987492   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:15.067402   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:18.139468   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:24.219465   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:27.291400   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:33.371373   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:36.443397   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:42.523396   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:45.595471   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:51.675409   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:46:54.747333   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:00.827402   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:03.899433   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:09.979389   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:13.051444   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:19.131393   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:22.203503   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:28.283424   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:31.355401   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:37.435418   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:40.507369   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:46.587351   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:49.659395   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:55.739425   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:47:58.811395   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:04.891407   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:07.963451   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:14.043353   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:17.115375   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:23.195400   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:26.267448   58307 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.168:22: connect: no route to host
	I0802 18:48:29.271902   58571 start.go:364] duration metric: took 4m17.747886721s to acquireMachinesLock for "old-k8s-version-490984"
	I0802 18:48:29.271958   58571 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:48:29.271963   58571 fix.go:54] fixHost starting: 
	I0802 18:48:29.272266   58571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:48:29.272294   58571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:48:29.287602   58571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0802 18:48:29.288060   58571 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:48:29.288528   58571 main.go:141] libmachine: Using API Version  1
	I0802 18:48:29.288545   58571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:48:29.288857   58571 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:48:29.289034   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:29.289174   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetState
	I0802 18:48:29.291032   58571 fix.go:112] recreateIfNeeded on old-k8s-version-490984: state=Stopped err=<nil>
	I0802 18:48:29.291053   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	W0802 18:48:29.291239   58571 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:48:29.293011   58571 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490984" ...
	I0802 18:48:29.294248   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .Start
	I0802 18:48:29.294418   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring networks are active...
	I0802 18:48:29.295224   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network default is active
	I0802 18:48:29.295661   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network mk-old-k8s-version-490984 is active
	I0802 18:48:29.296018   58571 main.go:141] libmachine: (old-k8s-version-490984) Getting domain xml...
	I0802 18:48:29.296974   58571 main.go:141] libmachine: (old-k8s-version-490984) Creating domain...
	I0802 18:48:30.503712   58571 main.go:141] libmachine: (old-k8s-version-490984) Waiting to get IP...
	I0802 18:48:30.504530   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:30.504922   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:30.504996   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:30.504910   59906 retry.go:31] will retry after 307.580681ms: waiting for machine to come up
	I0802 18:48:30.814553   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:30.814985   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:30.815020   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:30.814914   59906 retry.go:31] will retry after 243.906736ms: waiting for machine to come up
	I0802 18:48:31.060406   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.060854   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.060880   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.060820   59906 retry.go:31] will retry after 392.162755ms: waiting for machine to come up
	I0802 18:48:29.269466   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:48:29.269499   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:48:29.269822   58307 buildroot.go:166] provisioning hostname "no-preload-407306"
	I0802 18:48:29.269845   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:48:29.270039   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:48:29.271776   58307 machine.go:97] duration metric: took 4m37.432365616s to provisionDockerMachine
	I0802 18:48:29.271818   58307 fix.go:56] duration metric: took 4m37.454813328s for fixHost
	I0802 18:48:29.271824   58307 start.go:83] releasing machines lock for "no-preload-407306", held for 4m37.454832945s
	W0802 18:48:29.271843   58307 start.go:714] error starting host: provision: host is not running
	W0802 18:48:29.271939   58307 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0802 18:48:29.271947   58307 start.go:729] Will try again in 5 seconds ...
	I0802 18:48:31.454321   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.454706   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.454733   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.454658   59906 retry.go:31] will retry after 424.820425ms: waiting for machine to come up
	I0802 18:48:31.881487   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.881988   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.882111   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.881954   59906 retry.go:31] will retry after 460.627573ms: waiting for machine to come up
	I0802 18:48:32.344538   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:32.344949   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:32.344978   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:32.344903   59906 retry.go:31] will retry after 589.234832ms: waiting for machine to come up
	I0802 18:48:32.935791   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:32.936157   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:32.936178   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:32.936141   59906 retry.go:31] will retry after 1.009164478s: waiting for machine to come up
	I0802 18:48:33.947364   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:33.947865   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:33.947888   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:33.947816   59906 retry.go:31] will retry after 1.052111058s: waiting for machine to come up
	I0802 18:48:35.001504   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:35.001985   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:35.002018   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:35.001932   59906 retry.go:31] will retry after 1.343846495s: waiting for machine to come up
	I0802 18:48:36.347528   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:36.347869   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:36.347921   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:36.347855   59906 retry.go:31] will retry after 1.919219744s: waiting for machine to come up
	I0802 18:48:34.274433   58307 start.go:360] acquireMachinesLock for no-preload-407306: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:48:38.269875   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:38.270312   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:38.270341   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:38.270293   59906 retry.go:31] will retry after 2.307222377s: waiting for machine to come up
	I0802 18:48:40.579469   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:40.579904   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:40.579936   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:40.579851   59906 retry.go:31] will retry after 2.436290529s: waiting for machine to come up
	I0802 18:48:43.019426   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:43.019804   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:43.019843   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:43.019767   59906 retry.go:31] will retry after 3.69539651s: waiting for machine to come up
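	The retries above are the kvm2 driver polling libvirt for a DHCP lease on network mk-old-k8s-version-490984. A hedged way to inspect the same lease table by hand (standard virsh usage against the qemu:///system URI shown earlier; not something the log itself runs) would be:
	
	    # list current DHCP leases for the minikube-created libvirt network
	    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-490984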
	I0802 18:48:48.011672   58864 start.go:364] duration metric: took 3m58.585165435s to acquireMachinesLock for "default-k8s-diff-port-504903"
	I0802 18:48:48.011733   58864 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:48:48.011741   58864 fix.go:54] fixHost starting: 
	I0802 18:48:48.012157   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:48:48.012192   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:48:48.028998   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38363
	I0802 18:48:48.029355   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:48:48.029804   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:48:48.029831   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:48:48.030196   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:48:48.030379   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:48:48.030549   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:48:48.032135   58864 fix.go:112] recreateIfNeeded on default-k8s-diff-port-504903: state=Stopped err=<nil>
	I0802 18:48:48.032162   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	W0802 18:48:48.032319   58864 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:48:48.034521   58864 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-504903" ...
	I0802 18:48:48.036053   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Start
	I0802 18:48:48.036232   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Ensuring networks are active...
	I0802 18:48:48.037149   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Ensuring network default is active
	I0802 18:48:48.037521   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Ensuring network mk-default-k8s-diff-port-504903 is active
	I0802 18:48:48.038093   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Getting domain xml...
	I0802 18:48:48.038939   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Creating domain...
	I0802 18:48:49.318018   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting to get IP...
	I0802 18:48:49.319174   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:46.717837   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.718393   58571 main.go:141] libmachine: (old-k8s-version-490984) Found IP for machine: 192.168.50.104
	I0802 18:48:46.718419   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has current primary IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.718431   58571 main.go:141] libmachine: (old-k8s-version-490984) Reserving static IP address...
	I0802 18:48:46.718839   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "old-k8s-version-490984", mac: "52:54:00:e1:cb:7a", ip: "192.168.50.104"} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.718865   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | skip adding static IP to network mk-old-k8s-version-490984 - found existing host DHCP lease matching {name: "old-k8s-version-490984", mac: "52:54:00:e1:cb:7a", ip: "192.168.50.104"}
	I0802 18:48:46.718875   58571 main.go:141] libmachine: (old-k8s-version-490984) Reserved static IP address: 192.168.50.104
	I0802 18:48:46.718889   58571 main.go:141] libmachine: (old-k8s-version-490984) Waiting for SSH to be available...
	I0802 18:48:46.718898   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Getting to WaitForSSH function...
	I0802 18:48:46.720922   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.721259   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.721296   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.721420   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH client type: external
	I0802 18:48:46.721445   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa (-rw-------)
	I0802 18:48:46.721482   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:48:46.721546   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | About to run SSH command:
	I0802 18:48:46.721568   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | exit 0
	I0802 18:48:46.842782   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | SSH cmd err, output: <nil>: 
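	The WaitForSSH probe above shells out to /usr/bin/ssh with the arguments listed in the DBG line; reassembled into a single invocation it is roughly the following (how the remote "exit 0" is appended is an assumption):
	
	    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	      -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.104 \
	      -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa \
	      -p 22 "exit 0"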
	I0802 18:48:46.843151   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetConfigRaw
	I0802 18:48:46.843733   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:46.846029   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.846320   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.846348   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.846618   58571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json ...
	I0802 18:48:46.846797   58571 machine.go:94] provisionDockerMachine start ...
	I0802 18:48:46.846814   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:46.847004   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:46.849141   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.849499   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.849523   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.849670   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:46.849858   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.849992   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.850123   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:46.850301   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:46.850484   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:46.850495   58571 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:48:46.947427   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:48:46.947456   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:46.947690   58571 buildroot.go:166] provisioning hostname "old-k8s-version-490984"
	I0802 18:48:46.947726   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:46.947927   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:46.950710   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.951067   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.951094   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.951396   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:46.951565   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.951738   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.951887   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:46.952038   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:46.952217   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:46.952229   58571 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490984 && echo "old-k8s-version-490984" | sudo tee /etc/hostname
	I0802 18:48:47.060408   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490984
	
	I0802 18:48:47.060435   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.063083   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.063461   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.063492   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.063610   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.063787   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.063934   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.064129   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.064331   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.064502   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.064518   58571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490984/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:48:47.166680   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:48:47.166724   58571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:48:47.166749   58571 buildroot.go:174] setting up certificates
	I0802 18:48:47.166759   58571 provision.go:84] configureAuth start
	I0802 18:48:47.166770   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:47.167085   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:47.169842   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.170244   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.170279   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.170424   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.172587   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.172942   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.172972   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.173081   58571 provision.go:143] copyHostCerts
	I0802 18:48:47.173130   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:48:47.173142   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:48:47.173210   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:48:47.173305   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:48:47.173313   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:48:47.173339   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:48:47.173408   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:48:47.173416   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:48:47.173438   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:48:47.173504   58571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490984 san=[127.0.0.1 192.168.50.104 localhost minikube old-k8s-version-490984]
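	provision.go generates this server certificate natively in Go; a roughly equivalent manual sketch with openssl, for illustration only, using the same CA material and SANs named in the log line above, would be:
	
	    openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-490984/CN=minikube" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.104,DNS:localhost,DNS:minikube,DNS:old-k8s-version-490984') \
	      -out server.pem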
	I0802 18:48:47.397577   58571 provision.go:177] copyRemoteCerts
	I0802 18:48:47.397633   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:48:47.397657   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.400444   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.400761   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.400789   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.400911   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.401126   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.401305   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.401451   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:47.477120   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:48:47.499051   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:48:47.520431   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 18:48:47.541493   58571 provision.go:87] duration metric: took 374.722098ms to configureAuth
	I0802 18:48:47.541523   58571 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:48:47.541731   58571 config.go:182] Loaded profile config "old-k8s-version-490984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:48:47.541819   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.544555   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.544903   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.544939   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.545047   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.545256   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.545421   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.545543   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.545707   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.545852   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.545866   58571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:48:47.793135   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:48:47.793168   58571 machine.go:97] duration metric: took 946.358268ms to provisionDockerMachine
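	The `%!s(MISSING)` in the command a few lines above is Go's fmt placeholder for a verb that was logged without its argument; judging from the output tee'd back, the command actually executed over SSH is along these lines:
	
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio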
	I0802 18:48:47.793188   58571 start.go:293] postStartSetup for "old-k8s-version-490984" (driver="kvm2")
	I0802 18:48:47.793200   58571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:48:47.793239   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:47.793602   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:48:47.793631   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.796301   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.796747   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.796774   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.796984   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.797205   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.797478   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.797638   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:47.877244   58571 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:48:47.881119   58571 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:48:47.881157   58571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:48:47.881235   58571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:48:47.881321   58571 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:48:47.881417   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:48:47.889970   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:48:47.911725   58571 start.go:296] duration metric: took 118.525715ms for postStartSetup
	I0802 18:48:47.911765   58571 fix.go:56] duration metric: took 18.639800216s for fixHost
	I0802 18:48:47.911788   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.914229   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.914507   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.914536   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.914715   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.914932   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.915093   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.915283   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.915426   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.915597   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.915607   58571 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:48:48.011471   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722624527.988954809
	
	I0802 18:48:48.011501   58571 fix.go:216] guest clock: 1722624527.988954809
	I0802 18:48:48.011513   58571 fix.go:229] Guest: 2024-08-02 18:48:47.988954809 +0000 UTC Remote: 2024-08-02 18:48:47.911770242 +0000 UTC m=+276.540714762 (delta=77.184567ms)
	I0802 18:48:48.011550   58571 fix.go:200] guest clock delta is within tolerance: 77.184567ms
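
The fix.go lines above compare the guest's `date +%s.%N` output against the local wall clock and accept the host when the delta stays under a tolerance. Below is a minimal Go sketch of that comparison; the one-second tolerance and the hard-coded sample output are assumptions for illustration only, not values taken from minikube's source.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Example output as captured in the log above.
	guest, err := parseGuestClock("1722624527.988954809")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 1 * time.Second // assumed tolerance for this sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
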
	I0802 18:48:48.011558   58571 start.go:83] releasing machines lock for "old-k8s-version-490984", held for 18.739614915s
	I0802 18:48:48.011590   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.011904   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:48.014631   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.015163   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.015195   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.015325   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.015902   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.016099   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.016197   58571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:48:48.016241   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:48.016326   58571 ssh_runner.go:195] Run: cat /version.json
	I0802 18:48:48.016354   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:48.019187   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019391   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019565   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.019588   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019733   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.019794   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:48.019803   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019935   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:48.020016   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:48.020077   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:48.020180   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:48.020268   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:48.020334   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:48.020408   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:48.125009   58571 ssh_runner.go:195] Run: systemctl --version
	I0802 18:48:48.130495   58571 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:48:48.274836   58571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:48:48.280446   58571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:48:48.280517   58571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:48:48.295198   58571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:48:48.295222   58571 start.go:495] detecting cgroup driver to use...
	I0802 18:48:48.295294   58571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:48:48.310716   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:48:48.324219   58571 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:48:48.324275   58571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:48:48.337583   58571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:48:48.350509   58571 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:48:48.457711   58571 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:48:48.613498   58571 docker.go:233] disabling docker service ...
	I0802 18:48:48.613584   58571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:48:48.630221   58571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:48:48.642385   58571 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:48:48.781056   58571 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:48:48.924495   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:48:48.938824   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:48:48.956224   58571 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0802 18:48:48.956315   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.966431   58571 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:48:48.966508   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.977309   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.987155   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.997040   58571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
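
The sed commands above pin the CRI-O pause image to registry.k8s.io/pause:3.2, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". The Go sketch below performs the same line-oriented rewrite on an in-memory copy of a 02-crio.conf fragment; the starting contents of that fragment are assumed for illustration.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Assumed starting fragment of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirror the three sed edits from the log: drop the old conmon_cgroup line,
	// pin the pause image, and set cgroupfs with conmon_cgroup = "pod" after it.
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n`)
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = conmon.ReplaceAllString(conf, "")
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(strings.TrimSpace(conf) + "\n")
}
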
	I0802 18:48:49.007582   58571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:48:49.017581   58571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:48:49.017641   58571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:48:49.029876   58571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:48:49.040020   58571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:48:49.155163   58571 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:48:49.289885   58571 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:48:49.289961   58571 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:48:49.295125   58571 start.go:563] Will wait 60s for crictl version
	I0802 18:48:49.295185   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:49.298824   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:48:49.334988   58571 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
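
The version block above is the key/value output of `crictl version`. A minimal Go sketch of parsing that output into a map follows; the exact field spacing is taken from the log, everything else is illustrative.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits `crictl version` output into key/value pairs.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	// The output captured in the log above.
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1`
	v := parseCrictlVersion(out)
	fmt.Printf("runtime %s %s (API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
}
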
	I0802 18:48:49.335088   58571 ssh_runner.go:195] Run: crio --version
	I0802 18:48:49.362449   58571 ssh_runner.go:195] Run: crio --version
	I0802 18:48:49.390675   58571 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0802 18:48:49.391954   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:49.395185   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:49.395560   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:49.395612   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:49.395840   58571 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0802 18:48:49.399621   58571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:48:49.411028   58571 kubeadm.go:883] updating cluster {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:48:49.411196   58571 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:48:49.411332   58571 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:48:49.458890   58571 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:48:49.458956   58571 ssh_runner.go:195] Run: which lz4
	I0802 18:48:49.462789   58571 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:48:49.466642   58571 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:48:49.466682   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0802 18:48:50.914994   58571 crio.go:462] duration metric: took 1.452253234s to copy over tarball
	I0802 18:48:50.915068   58571 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
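
The crio.go:510 message above comes from checking `sudo crictl images --output json` for the expected kube-apiserver tag and falling back to the preload tarball when it is missing. A Go sketch of that presence check follows; it assumes crictl's JSON output is an "images" array with "repoTags", as in the CRI ListImages response, and uses a tiny stand-in payload instead of real output.

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models only the fields of `crictl images --output json` that the
// check needs: an images array whose entries carry repoTags.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether ref appears among the repo tags in the raw JSON output.
func hasImage(raw []byte, ref string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == ref {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// A tiny stand-in for real crictl output.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("preloaded image not found; would copy and extract the preload tarball")
	}
}
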
	I0802 18:48:49.319722   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:49.319788   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:49.319686   60051 retry.go:31] will retry after 236.497556ms: waiting for machine to come up
	I0802 18:48:49.558143   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:49.558616   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:49.558693   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:49.558582   60051 retry.go:31] will retry after 238.000274ms: waiting for machine to come up
	I0802 18:48:49.798203   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:49.798674   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:49.798697   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:49.798643   60051 retry.go:31] will retry after 317.686436ms: waiting for machine to come up
	I0802 18:48:50.117885   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:50.118420   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:50.118452   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:50.118363   60051 retry.go:31] will retry after 463.73535ms: waiting for machine to come up
	I0802 18:48:50.584264   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:50.584808   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:50.584849   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:50.584775   60051 retry.go:31] will retry after 622.935297ms: waiting for machine to come up
	I0802 18:48:51.209196   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:51.209585   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:51.209615   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:51.209528   60051 retry.go:31] will retry after 618.911618ms: waiting for machine to come up
	I0802 18:48:51.830455   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:51.830967   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:51.831001   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:51.830902   60051 retry.go:31] will retry after 970.179831ms: waiting for machine to come up
	I0802 18:48:52.802193   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:52.802638   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:52.802666   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:52.802570   60051 retry.go:31] will retry after 1.088398353s: waiting for machine to come up
	I0802 18:48:53.891948   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:53.892296   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:53.892329   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:53.892272   60051 retry.go:31] will retry after 1.821835645s: waiting for machine to come up
	I0802 18:48:53.707251   58571 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.792154194s)
	I0802 18:48:53.707284   58571 crio.go:469] duration metric: took 2.792264852s to extract the tarball
	I0802 18:48:53.707294   58571 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:48:53.749509   58571 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:48:53.784343   58571 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:48:53.784368   58571 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 18:48:53.784448   58571 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:53.784471   58571 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:53.784506   58571 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:53.784530   58571 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:53.784555   58571 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:53.784504   58571 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:53.784511   58571 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:53.784471   58571 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0802 18:48:53.786203   58571 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:53.786215   58571 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:53.786238   58571 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:53.786242   58571 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:53.786209   58571 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:53.786266   58571 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0802 18:48:53.786286   58571 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:53.786309   58571 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:54.020645   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0802 18:48:54.055338   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.060117   58571 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0802 18:48:54.060168   58571 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0802 18:48:54.060212   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.064500   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.074234   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.077297   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.090758   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.100361   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.118683   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0802 18:48:54.118733   58571 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0802 18:48:54.118769   58571 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.118810   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.175733   58571 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0802 18:48:54.175785   58571 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.175839   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.202446   58571 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0802 18:48:54.202499   58571 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.202501   58571 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0802 18:48:54.202540   58571 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.202552   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.202580   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.238954   58571 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0802 18:48:54.238998   58571 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.239020   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.239046   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.239019   58571 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0802 18:48:54.239150   58571 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.239179   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.246523   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0802 18:48:54.246560   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.246592   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.246629   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.251430   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.341115   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0802 18:48:54.341176   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.353210   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0802 18:48:54.357711   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0802 18:48:54.357793   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0802 18:48:54.357830   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0802 18:48:54.377314   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0802 18:48:54.667926   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:54.810673   58571 cache_images.go:92] duration metric: took 1.026282543s to LoadCachedImages
	W0802 18:48:54.810786   58571 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0802 18:48:54.810860   58571 kubeadm.go:934] updating node { 192.168.50.104 8443 v1.20.0 crio true true} ...
	I0802 18:48:54.811043   58571 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490984 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
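
The kubelet [Unit]/[Service]/[Install] block above is a systemd drop-in rendered with the node's name, IP, and Kubernetes version. Below is a simplified Go text/template sketch of that rendering; the template text is abridged and the field names are this sketch's own, not minikube's.

package main

import (
	"os"
	"text/template"
)

// An abridged version of the kubelet drop-in shown above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-490984", "192.168.50.104"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
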
	I0802 18:48:54.811172   58571 ssh_runner.go:195] Run: crio config
	I0802 18:48:54.858477   58571 cni.go:84] Creating CNI manager for ""
	I0802 18:48:54.858501   58571 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:48:54.858513   58571 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:48:54.858548   58571 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490984 NodeName:old-k8s-version-490984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0802 18:48:54.858702   58571 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490984"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
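
The kubeadm config above declares podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12. As a quick illustration, the Go sketch below checks that both CIDRs parse and do not overlap; the overlap check itself is this sketch's addition, not a step minikube is logging here.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The subnets declared in the kubeadm config above.
	pod := netip.MustParsePrefix("10.244.0.0/16")
	svc := netip.MustParsePrefix("10.96.0.0/12")

	if pod.Overlaps(svc) {
		fmt.Println("pod and service CIDRs overlap; kubeadm would misroute traffic")
		return
	}
	fmt.Printf("pod CIDR %s and service CIDR %s are disjoint\n", pod, svc)
}
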
	
	I0802 18:48:54.858783   58571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0802 18:48:54.868766   58571 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:48:54.868846   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:48:54.878136   58571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0802 18:48:54.894844   58571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:48:54.910396   58571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0802 18:48:54.929209   58571 ssh_runner.go:195] Run: grep 192.168.50.104	control-plane.minikube.internal$ /etc/hosts
	I0802 18:48:54.932947   58571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
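
The bash pipeline above makes the hosts entry idempotent: it drops any existing line ending in "\tcontrol-plane.minikube.internal" and appends a fresh one. A Go sketch of the same upsert on an in-memory hosts file follows; the sample hosts contents are assumed.

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing line ending in "\t<name>" and appends a
// fresh "ip\tname" entry, mirroring the grep -v / echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.50.104", "control-plane.minikube.internal"))
}
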
	I0802 18:48:54.946404   58571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:48:55.063040   58571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:48:55.083216   58571 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984 for IP: 192.168.50.104
	I0802 18:48:55.083252   58571 certs.go:194] generating shared ca certs ...
	I0802 18:48:55.083274   58571 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:48:55.083478   58571 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:48:55.083544   58571 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:48:55.083564   58571 certs.go:256] generating profile certs ...
	I0802 18:48:55.083692   58571 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.key
	I0802 18:48:55.083785   58571 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073
	I0802 18:48:55.083847   58571 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key
	I0802 18:48:55.084009   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:48:55.084066   58571 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:48:55.084083   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:48:55.084124   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:48:55.084162   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:48:55.084199   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:48:55.084267   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:48:55.084999   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:48:55.128252   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:48:55.163809   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:48:55.190126   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:48:55.219164   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 18:48:55.247315   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:48:55.297162   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:48:55.326070   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:48:55.349221   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:48:55.371877   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:48:55.394715   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:48:55.417601   58571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:48:55.433829   58571 ssh_runner.go:195] Run: openssl version
	I0802 18:48:55.439490   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:48:55.449897   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.454201   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.454259   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.459982   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:48:55.469959   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:48:55.480093   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.484484   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.484558   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.489763   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:48:55.500296   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:48:55.510694   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.515067   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.515154   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.521358   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:48:55.531311   58571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:48:55.536083   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:48:55.541867   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:48:55.547185   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:48:55.552672   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:48:55.557817   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:48:55.563287   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
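
Each `openssl x509 -checkend 86400` run above fails if the certificate expires within the next 24 hours, which is the trigger for regenerating it. The Go sketch below performs the equivalent test with crypto/x509; the relative file path in main is a placeholder, since on the guest these certificates live under /var/lib/minikube/certs.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// Placeholder path for the sketch.
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("skipping check:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; would regenerate")
	}
}
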
	I0802 18:48:55.568597   58571 kubeadm.go:392] StartCluster: {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:48:55.568699   58571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:48:55.568749   58571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:48:55.612416   58571 cri.go:89] found id: ""
	I0802 18:48:55.612487   58571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:48:55.621919   58571 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 18:48:55.621938   58571 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 18:48:55.621977   58571 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 18:48:55.630826   58571 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:48:55.631493   58571 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:48:55.631838   58571 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-5397/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-490984" cluster setting kubeconfig missing "old-k8s-version-490984" context setting]
	I0802 18:48:55.632363   58571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:48:55.634644   58571 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 18:48:55.643386   58571 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.104
	I0802 18:48:55.643416   58571 kubeadm.go:1160] stopping kube-system containers ...
	I0802 18:48:55.643429   58571 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 18:48:55.643488   58571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:48:55.676501   58571 cri.go:89] found id: ""
	I0802 18:48:55.676577   58571 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 18:48:55.692747   58571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:48:55.701664   58571 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:48:55.701686   58571 kubeadm.go:157] found existing configuration files:
	
	I0802 18:48:55.701734   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:48:55.710027   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:48:55.710079   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:48:55.719120   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:48:55.727623   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:48:55.727667   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:48:55.736204   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:48:55.744564   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:48:55.744641   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:48:55.753239   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:48:55.761560   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:48:55.761613   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
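
The sequence above greps each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes files that do not reference it (here they are simply absent). A Go sketch of that stale-config sweep follows; the directory constant is the guest path from the log, so point it elsewhere to try the sketch locally.

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	dir := "/etc/kubernetes" // guest path from the log; change to experiment locally
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil {
			// Missing file: nothing to clean up, matching the grep "No such file" case above.
			continue
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Println("stale config, removing:", path)
			_ = os.Remove(path)
		}
	}
}
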
	I0802 18:48:55.770368   58571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:48:55.779598   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:55.893533   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:55.716314   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:55.716737   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:55.716765   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:55.716689   60051 retry.go:31] will retry after 2.151408924s: waiting for machine to come up
	I0802 18:48:57.871119   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:57.871599   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:57.871627   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:57.871539   60051 retry.go:31] will retry after 1.759073614s: waiting for machine to come up
	I0802 18:48:56.864800   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:57.089710   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:57.184779   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:57.268095   58571 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:48:57.268190   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:57.768972   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:58.268488   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:58.768518   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:59.269207   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:59.768438   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:00.269117   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:00.768397   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:01.269091   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:59.632243   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:48:59.632810   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:48:59.632842   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:48:59.632747   60051 retry.go:31] will retry after 3.51400395s: waiting for machine to come up
	I0802 18:49:03.148757   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:03.149211   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | unable to find current IP address of domain default-k8s-diff-port-504903 in network mk-default-k8s-diff-port-504903
	I0802 18:49:03.149250   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | I0802 18:49:03.149131   60051 retry.go:31] will retry after 3.05565183s: waiting for machine to come up
	I0802 18:49:01.769121   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:02.268891   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:02.768679   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:03.269000   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:03.768285   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:04.268702   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:04.768630   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:05.269090   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:05.768354   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:06.268502   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:07.699544   59196 start.go:364] duration metric: took 4m1.725982243s to acquireMachinesLock for "newest-cni-198962"
	I0802 18:49:07.700015   59196 start.go:93] Provisioning new machine with config: &{Name:newest-cni-198962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-198962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:49:07.700264   59196 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 18:49:06.208479   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.208944   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Found IP for machine: 192.168.61.183
	I0802 18:49:06.208967   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has current primary IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.208975   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Reserving static IP address...
	I0802 18:49:06.209298   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-504903", mac: "52:54:00:83:0f:3b", ip: "192.168.61.183"} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.209324   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | skip adding static IP to network mk-default-k8s-diff-port-504903 - found existing host DHCP lease matching {name: "default-k8s-diff-port-504903", mac: "52:54:00:83:0f:3b", ip: "192.168.61.183"}
	I0802 18:49:06.209346   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Reserved static IP address: 192.168.61.183
	I0802 18:49:06.209360   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Waiting for SSH to be available...
	I0802 18:49:06.209369   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Getting to WaitForSSH function...
	I0802 18:49:06.212248   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.212648   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.212679   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.212830   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Using SSH client type: external
	I0802 18:49:06.212858   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa (-rw-------)
	I0802 18:49:06.212891   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.183 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:49:06.212911   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | About to run SSH command:
	I0802 18:49:06.212928   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | exit 0
	I0802 18:49:06.342987   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | SSH cmd err, output: <nil>: 
	I0802 18:49:06.343534   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetConfigRaw
	I0802 18:49:06.344210   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetIP
	I0802 18:49:06.346964   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.347301   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.347335   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.347580   58864 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/config.json ...
	I0802 18:49:06.347751   58864 machine.go:94] provisionDockerMachine start ...
	I0802 18:49:06.347767   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:06.347955   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:06.350326   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.350652   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.350676   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.350821   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:06.351060   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.351253   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.351394   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:06.351542   58864 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:06.351744   58864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0802 18:49:06.351756   58864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:49:06.459135   58864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:49:06.459168   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetMachineName
	I0802 18:49:06.459392   58864 buildroot.go:166] provisioning hostname "default-k8s-diff-port-504903"
	I0802 18:49:06.459436   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetMachineName
	I0802 18:49:06.459641   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:06.462368   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.462745   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.462768   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.462878   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:06.463047   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.463178   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.463289   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:06.463416   58864 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:06.463587   58864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0802 18:49:06.463605   58864 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-504903 && echo "default-k8s-diff-port-504903" | sudo tee /etc/hostname
	I0802 18:49:06.586183   58864 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-504903
	
	I0802 18:49:06.586209   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:06.588970   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.589374   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.589421   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.589579   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:06.589760   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.589942   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:06.590117   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:06.590287   58864 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:06.590448   58864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0802 18:49:06.590464   58864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-504903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-504903/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-504903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:49:06.712488   58864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:49:06.712530   58864 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:49:06.712583   58864 buildroot.go:174] setting up certificates
	I0802 18:49:06.712593   58864 provision.go:84] configureAuth start
	I0802 18:49:06.712603   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetMachineName
	I0802 18:49:06.712926   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetIP
	I0802 18:49:06.715711   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.716166   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.716198   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.716348   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:06.718832   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.719217   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:06.719249   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:06.719423   58864 provision.go:143] copyHostCerts
	I0802 18:49:06.719486   58864 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:49:06.719498   58864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:49:06.719571   58864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:49:06.719690   58864 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:49:06.719695   58864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:49:06.719720   58864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:49:06.719777   58864 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:49:06.719784   58864 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:49:06.719802   58864 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:49:06.719850   58864 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-504903 san=[127.0.0.1 192.168.61.183 default-k8s-diff-port-504903 localhost minikube]
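The SAN list logged above (127.0.0.1, 192.168.61.183, the profile name, localhost, minikube) is baked into the generated server.pem. One way to confirm it from the Jenkins workspace, assuming only the path shown in the log line, is an openssl inspection (illustrative, not part of the test run):

    # Print the certificate text and its Subject Alternative Name entries
    openssl x509 -in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'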
	I0802 18:49:07.050414   58864 provision.go:177] copyRemoteCerts
	I0802 18:49:07.050468   58864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:49:07.050506   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.053214   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.053510   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.053541   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.053664   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.053873   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.054011   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.054143   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:07.136757   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0802 18:49:07.158836   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 18:49:07.180394   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:49:07.201516   58864 provision.go:87] duration metric: took 488.903212ms to configureAuth
	I0802 18:49:07.201547   58864 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:49:07.201777   58864 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:49:07.201880   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.204945   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.205273   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.205304   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.205456   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.205734   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.205932   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.206098   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.206261   58864 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:07.206425   58864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0802 18:49:07.206441   58864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:49:07.465013   58864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:49:07.465048   58864 machine.go:97] duration metric: took 1.117284443s to provisionDockerMachine
	I0802 18:49:07.465064   58864 start.go:293] postStartSetup for "default-k8s-diff-port-504903" (driver="kvm2")
	I0802 18:49:07.465080   58864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:49:07.465101   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:07.465444   58864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:49:07.465481   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.467974   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.468260   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.468282   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.468405   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.468582   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.468733   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.468838   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:07.553251   58864 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:49:07.557016   58864 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:49:07.557037   58864 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:49:07.557099   58864 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:49:07.557168   58864 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:49:07.557252   58864 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:49:07.566043   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:49:07.587497   58864 start.go:296] duration metric: took 122.416306ms for postStartSetup
	I0802 18:49:07.587534   58864 fix.go:56] duration metric: took 19.575794571s for fixHost
	I0802 18:49:07.587562   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.590582   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.590928   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.590955   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.591121   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.591328   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.591521   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.591689   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.591865   58864 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:07.592054   58864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I0802 18:49:07.592068   58864 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 18:49:07.699365   58864 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722624547.675039881
	
	I0802 18:49:07.699389   58864 fix.go:216] guest clock: 1722624547.675039881
	I0802 18:49:07.699398   58864 fix.go:229] Guest: 2024-08-02 18:49:07.675039881 +0000 UTC Remote: 2024-08-02 18:49:07.587537623 +0000 UTC m=+258.304415053 (delta=87.502258ms)
	I0802 18:49:07.699445   58864 fix.go:200] guest clock delta is within tolerance: 87.502258ms
	I0802 18:49:07.699450   58864 start.go:83] releasing machines lock for "default-k8s-diff-port-504903", held for 19.68773979s
	I0802 18:49:07.699482   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:07.699776   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetIP
	I0802 18:49:07.702861   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.703378   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.703407   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.703592   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:07.704304   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:07.704496   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:07.704605   58864 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:49:07.704651   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.704771   58864 ssh_runner.go:195] Run: cat /version.json
	I0802 18:49:07.704792   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:07.707207   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.707542   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.707586   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.707610   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.707735   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.707919   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.707991   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:07.708018   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:07.708097   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.708178   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:07.708269   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:07.708328   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:07.708490   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:07.708650   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:07.825474   58864 ssh_runner.go:195] Run: systemctl --version
	I0802 18:49:07.831941   58864 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:49:07.976387   58864 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:49:07.982599   58864 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:49:07.982680   58864 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:49:08.000948   58864 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:49:08.000977   58864 start.go:495] detecting cgroup driver to use...
	I0802 18:49:08.001051   58864 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:49:08.017468   58864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:49:08.032346   58864 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:49:08.032410   58864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:49:08.046401   58864 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:49:08.059888   58864 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:49:08.181784   58864 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:49:08.340793   58864 docker.go:233] disabling docker service ...
	I0802 18:49:08.340870   58864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:49:08.354209   58864 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:49:08.367157   58864 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:49:08.492287   58864 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:49:08.619229   58864 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:49:08.633845   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:49:08.652543   58864 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 18:49:08.652608   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.662107   58864 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:49:08.662179   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.671649   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.680806   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.689756   58864 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:49:08.699033   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.708363   58864 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:08.723851   58864 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
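The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A quick way to eyeball the result on the guest, with the expected lines sketched in comments (an assumption reconstructed from the commands, not a dump of the actual file):

    # Expected key lines after the edits above (illustrative):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf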
	I0802 18:49:08.733215   58864 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:49:08.741747   58864 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:49:08.741814   58864 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:49:08.756870   58864 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
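Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, the tooling falls back to loading br_netfilter and then enables IP forwarding. A small sanity check one could run on the guest afterwards (illustrative, not part of the test):

    # Confirm the module is loaded and the sysctls the bridge CNI relies on are set
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward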
	I0802 18:49:08.767068   58864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:49:08.881278   58864 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:49:09.023662   58864 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:49:09.023769   58864 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:49:09.028468   58864 start.go:563] Will wait 60s for crictl version
	I0802 18:49:09.028527   58864 ssh_runner.go:195] Run: which crictl
	I0802 18:49:09.032085   58864 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:49:09.078794   58864 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:49:09.078876   58864 ssh_runner.go:195] Run: crio --version
	I0802 18:49:09.110107   58864 ssh_runner.go:195] Run: crio --version
	I0802 18:49:09.145745   58864 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 18:49:09.147316   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetIP
	I0802 18:49:09.150662   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:09.151180   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:09.151218   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:09.151495   58864 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0802 18:49:09.155439   58864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
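The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway 192.168.61.1. Verifying the entry on the guest is a single grep (illustrative):

    grep host.minikube.internal /etc/hosts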
	I0802 18:49:09.169964   58864 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-504903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-504903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:49:09.170081   58864 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:49:09.170137   58864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:49:09.223395   58864 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 18:49:09.223491   58864 ssh_runner.go:195] Run: which lz4
	I0802 18:49:09.227232   58864 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 18:49:09.231189   58864 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:49:09.231215   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
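Since no preload was found on the guest, the roughly 406 MB lz4 tarball of cached images is copied over. To peek at what such a preload contains without extracting it, one could run something like the following on the host (illustrative; the filename is the one shown in the scp line above):

    # List the first entries of the cri-o preload tarball
    lz4 -dc preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 | tar -t | head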
	I0802 18:49:07.702434   59196 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0802 18:49:07.702851   59196 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:07.702895   59196 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:07.722590   59196 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0802 18:49:07.722976   59196 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:07.723598   59196 main.go:141] libmachine: Using API Version  1
	I0802 18:49:07.723624   59196 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:07.723956   59196 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:07.724230   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetMachineName
	I0802 18:49:07.724423   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:07.724570   59196 start.go:159] libmachine.API.Create for "newest-cni-198962" (driver="kvm2")
	I0802 18:49:07.724595   59196 client.go:168] LocalClient.Create starting
	I0802 18:49:07.724634   59196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 18:49:07.724666   59196 main.go:141] libmachine: Decoding PEM data...
	I0802 18:49:07.724680   59196 main.go:141] libmachine: Parsing certificate...
	I0802 18:49:07.724731   59196 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 18:49:07.724749   59196 main.go:141] libmachine: Decoding PEM data...
	I0802 18:49:07.724760   59196 main.go:141] libmachine: Parsing certificate...
	I0802 18:49:07.724776   59196 main.go:141] libmachine: Running pre-create checks...
	I0802 18:49:07.724785   59196 main.go:141] libmachine: (newest-cni-198962) Calling .PreCreateCheck
	I0802 18:49:07.725178   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetConfigRaw
	I0802 18:49:07.725596   59196 main.go:141] libmachine: Creating machine...
	I0802 18:49:07.725612   59196 main.go:141] libmachine: (newest-cni-198962) Calling .Create
	I0802 18:49:07.725763   59196 main.go:141] libmachine: (newest-cni-198962) Creating KVM machine...
	I0802 18:49:07.727009   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found existing default KVM network
	I0802 18:49:07.728317   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:07.728184   60231 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:fd:87} reservation:<nil>}
	I0802 18:49:07.729035   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:07.728954   60231 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1e:45:63} reservation:<nil>}
	I0802 18:49:07.729748   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:07.729667   60231 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:81:4b} reservation:<nil>}
	I0802 18:49:07.730633   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:07.730566   60231 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002bb4e0}
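The driver walks the existing libvirt networks (192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 are already taken) and settles on 192.168.72.0/24. The same picture can be reproduced manually on the host (illustrative):

    # Show the libvirt networks and host routes the subnet picker has to avoid
    virsh net-list --all
    ip -4 route show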
	I0802 18:49:07.730685   59196 main.go:141] libmachine: (newest-cni-198962) DBG | created network xml: 
	I0802 18:49:07.730704   59196 main.go:141] libmachine: (newest-cni-198962) DBG | <network>
	I0802 18:49:07.730713   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   <name>mk-newest-cni-198962</name>
	I0802 18:49:07.730722   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   <dns enable='no'/>
	I0802 18:49:07.730727   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   
	I0802 18:49:07.730734   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0802 18:49:07.730742   59196 main.go:141] libmachine: (newest-cni-198962) DBG |     <dhcp>
	I0802 18:49:07.730748   59196 main.go:141] libmachine: (newest-cni-198962) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0802 18:49:07.730754   59196 main.go:141] libmachine: (newest-cni-198962) DBG |     </dhcp>
	I0802 18:49:07.730764   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   </ip>
	I0802 18:49:07.730783   59196 main.go:141] libmachine: (newest-cni-198962) DBG |   
	I0802 18:49:07.730792   59196 main.go:141] libmachine: (newest-cni-198962) DBG | </network>
	I0802 18:49:07.730820   59196 main.go:141] libmachine: (newest-cni-198962) DBG | 
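minikube creates this network through the libvirt API, but the generated XML above could equally be applied by hand with virsh. A sketch, assuming the XML were saved to a hypothetical mk-newest-cni-198962.xml:

    # Define, start and autostart the private network from the XML shown above
    virsh net-define mk-newest-cni-198962.xml
    virsh net-start mk-newest-cni-198962
    virsh net-autostart mk-newest-cni-198962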
	I0802 18:49:07.736071   59196 main.go:141] libmachine: (newest-cni-198962) DBG | trying to create private KVM network mk-newest-cni-198962 192.168.72.0/24...
	I0802 18:49:07.808531   59196 main.go:141] libmachine: (newest-cni-198962) DBG | private KVM network mk-newest-cni-198962 192.168.72.0/24 created
	I0802 18:49:07.808566   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:07.808482   60231 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:49:07.808581   59196 main.go:141] libmachine: (newest-cni-198962) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962 ...
	I0802 18:49:07.808598   59196 main.go:141] libmachine: (newest-cni-198962) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 18:49:07.808700   59196 main.go:141] libmachine: (newest-cni-198962) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 18:49:08.066936   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:08.066805   60231 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa...
	I0802 18:49:08.164765   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:08.164583   60231 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/newest-cni-198962.rawdisk...
	I0802 18:49:08.164812   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Writing magic tar header
	I0802 18:49:08.164835   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Writing SSH key tar header
	I0802 18:49:08.164848   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:08.164771   60231 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962 ...
	I0802 18:49:08.165024   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962 (perms=drwx------)
	I0802 18:49:08.165058   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 18:49:08.165070   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962
	I0802 18:49:08.165089   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 18:49:08.165106   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:49:08.165124   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 18:49:08.165139   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 18:49:08.165151   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 18:49:08.165161   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home/jenkins
	I0802 18:49:08.165180   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Checking permissions on dir: /home
	I0802 18:49:08.165193   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Skipping /home - not owner
	I0802 18:49:08.165209   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 18:49:08.165223   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 18:49:08.165235   59196 main.go:141] libmachine: (newest-cni-198962) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 18:49:08.165251   59196 main.go:141] libmachine: (newest-cni-198962) Creating domain...
	I0802 18:49:08.166500   59196 main.go:141] libmachine: (newest-cni-198962) define libvirt domain using xml: 
	I0802 18:49:08.166521   59196 main.go:141] libmachine: (newest-cni-198962) <domain type='kvm'>
	I0802 18:49:08.166532   59196 main.go:141] libmachine: (newest-cni-198962)   <name>newest-cni-198962</name>
	I0802 18:49:08.166540   59196 main.go:141] libmachine: (newest-cni-198962)   <memory unit='MiB'>2200</memory>
	I0802 18:49:08.166552   59196 main.go:141] libmachine: (newest-cni-198962)   <vcpu>2</vcpu>
	I0802 18:49:08.166561   59196 main.go:141] libmachine: (newest-cni-198962)   <features>
	I0802 18:49:08.166574   59196 main.go:141] libmachine: (newest-cni-198962)     <acpi/>
	I0802 18:49:08.166583   59196 main.go:141] libmachine: (newest-cni-198962)     <apic/>
	I0802 18:49:08.166591   59196 main.go:141] libmachine: (newest-cni-198962)     <pae/>
	I0802 18:49:08.166605   59196 main.go:141] libmachine: (newest-cni-198962)     
	I0802 18:49:08.166617   59196 main.go:141] libmachine: (newest-cni-198962)   </features>
	I0802 18:49:08.166627   59196 main.go:141] libmachine: (newest-cni-198962)   <cpu mode='host-passthrough'>
	I0802 18:49:08.166637   59196 main.go:141] libmachine: (newest-cni-198962)   
	I0802 18:49:08.166646   59196 main.go:141] libmachine: (newest-cni-198962)   </cpu>
	I0802 18:49:08.166656   59196 main.go:141] libmachine: (newest-cni-198962)   <os>
	I0802 18:49:08.166667   59196 main.go:141] libmachine: (newest-cni-198962)     <type>hvm</type>
	I0802 18:49:08.166679   59196 main.go:141] libmachine: (newest-cni-198962)     <boot dev='cdrom'/>
	I0802 18:49:08.166693   59196 main.go:141] libmachine: (newest-cni-198962)     <boot dev='hd'/>
	I0802 18:49:08.166727   59196 main.go:141] libmachine: (newest-cni-198962)     <bootmenu enable='no'/>
	I0802 18:49:08.166751   59196 main.go:141] libmachine: (newest-cni-198962)   </os>
	I0802 18:49:08.166764   59196 main.go:141] libmachine: (newest-cni-198962)   <devices>
	I0802 18:49:08.166776   59196 main.go:141] libmachine: (newest-cni-198962)     <disk type='file' device='cdrom'>
	I0802 18:49:08.166793   59196 main.go:141] libmachine: (newest-cni-198962)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/boot2docker.iso'/>
	I0802 18:49:08.166804   59196 main.go:141] libmachine: (newest-cni-198962)       <target dev='hdc' bus='scsi'/>
	I0802 18:49:08.166815   59196 main.go:141] libmachine: (newest-cni-198962)       <readonly/>
	I0802 18:49:08.166825   59196 main.go:141] libmachine: (newest-cni-198962)     </disk>
	I0802 18:49:08.166855   59196 main.go:141] libmachine: (newest-cni-198962)     <disk type='file' device='disk'>
	I0802 18:49:08.166874   59196 main.go:141] libmachine: (newest-cni-198962)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 18:49:08.166886   59196 main.go:141] libmachine: (newest-cni-198962)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/newest-cni-198962.rawdisk'/>
	I0802 18:49:08.166892   59196 main.go:141] libmachine: (newest-cni-198962)       <target dev='hda' bus='virtio'/>
	I0802 18:49:08.166900   59196 main.go:141] libmachine: (newest-cni-198962)     </disk>
	I0802 18:49:08.166909   59196 main.go:141] libmachine: (newest-cni-198962)     <interface type='network'>
	I0802 18:49:08.166919   59196 main.go:141] libmachine: (newest-cni-198962)       <source network='mk-newest-cni-198962'/>
	I0802 18:49:08.166927   59196 main.go:141] libmachine: (newest-cni-198962)       <model type='virtio'/>
	I0802 18:49:08.166935   59196 main.go:141] libmachine: (newest-cni-198962)     </interface>
	I0802 18:49:08.166943   59196 main.go:141] libmachine: (newest-cni-198962)     <interface type='network'>
	I0802 18:49:08.166953   59196 main.go:141] libmachine: (newest-cni-198962)       <source network='default'/>
	I0802 18:49:08.166962   59196 main.go:141] libmachine: (newest-cni-198962)       <model type='virtio'/>
	I0802 18:49:08.166984   59196 main.go:141] libmachine: (newest-cni-198962)     </interface>
	I0802 18:49:08.167006   59196 main.go:141] libmachine: (newest-cni-198962)     <serial type='pty'>
	I0802 18:49:08.167023   59196 main.go:141] libmachine: (newest-cni-198962)       <target port='0'/>
	I0802 18:49:08.167041   59196 main.go:141] libmachine: (newest-cni-198962)     </serial>
	I0802 18:49:08.167054   59196 main.go:141] libmachine: (newest-cni-198962)     <console type='pty'>
	I0802 18:49:08.167066   59196 main.go:141] libmachine: (newest-cni-198962)       <target type='serial' port='0'/>
	I0802 18:49:08.167077   59196 main.go:141] libmachine: (newest-cni-198962)     </console>
	I0802 18:49:08.167088   59196 main.go:141] libmachine: (newest-cni-198962)     <rng model='virtio'>
	I0802 18:49:08.167110   59196 main.go:141] libmachine: (newest-cni-198962)       <backend model='random'>/dev/random</backend>
	I0802 18:49:08.167122   59196 main.go:141] libmachine: (newest-cni-198962)     </rng>
	I0802 18:49:08.167134   59196 main.go:141] libmachine: (newest-cni-198962)     
	I0802 18:49:08.167141   59196 main.go:141] libmachine: (newest-cni-198962)     
	I0802 18:49:08.167152   59196 main.go:141] libmachine: (newest-cni-198962)   </devices>
	I0802 18:49:08.167162   59196 main.go:141] libmachine: (newest-cni-198962) </domain>
	I0802 18:49:08.167174   59196 main.go:141] libmachine: (newest-cni-198962) 
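The block above is the raw domain XML the kvm2 driver hands to libvirt just before the "Creating domain..." step. minikube itself talks to libvirt through its Go bindings; as a rough, out-of-band sketch of the same define-and-start sequence, you could drive the virsh CLI from Go instead. The XML path below is hypothetical and the domain name is simply copied from this log.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a previously written domain XML with libvirt and boots it,
// loosely mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
func defineAndStart(xmlPath, domainName string) error {
	// "virsh define" makes the domain persistent from the XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// "virsh start" boots the defined domain.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical path for the XML shown above; minikube builds it in memory.
	if err := defineAndStart("/tmp/newest-cni-198962.xml", "newest-cni-198962"); err != nil {
		fmt.Println(err)
	}
}

Both networks referenced in the XML (default and mk-newest-cni-198962) have to be active before the start succeeds, which is what the "Ensuring networks are active..." lines that follow check.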
	I0802 18:49:08.174629   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:23:5d:29 in network default
	I0802 18:49:08.175287   59196 main.go:141] libmachine: (newest-cni-198962) Ensuring networks are active...
	I0802 18:49:08.175313   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:08.176215   59196 main.go:141] libmachine: (newest-cni-198962) Ensuring network default is active
	I0802 18:49:08.176645   59196 main.go:141] libmachine: (newest-cni-198962) Ensuring network mk-newest-cni-198962 is active
	I0802 18:49:08.177373   59196 main.go:141] libmachine: (newest-cni-198962) Getting domain xml...
	I0802 18:49:08.178197   59196 main.go:141] libmachine: (newest-cni-198962) Creating domain...
	I0802 18:49:09.449192   59196 main.go:141] libmachine: (newest-cni-198962) Waiting to get IP...
	I0802 18:49:09.450224   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:09.450703   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:09.450762   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:09.450695   60231 retry.go:31] will retry after 208.891767ms: waiting for machine to come up
	I0802 18:49:09.661199   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:09.661670   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:09.661699   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:09.661627   60231 retry.go:31] will retry after 333.710015ms: waiting for machine to come up
	I0802 18:49:09.997223   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:09.997679   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:09.997710   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:09.997629   60231 retry.go:31] will retry after 387.403704ms: waiting for machine to come up
	I0802 18:49:10.386367   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:10.386986   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:10.387017   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:10.386944   60231 retry.go:31] will retry after 571.613325ms: waiting for machine to come up
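The repeated "unable to find current IP address ... will retry after ..." lines are a polling loop: the driver keeps asking libvirt for a DHCP lease on the domain's MAC address and backs off with a growing, jittered delay until the machine reports an IP. A minimal sketch of that pattern, with lookupIP as a placeholder for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying libvirt's DHCP leases for the domain's MAC address.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP retries lookupIP with an increasing, jittered delay, like the retry.go waits above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}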
	I0802 18:49:06.769230   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:07.268885   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:07.769240   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:08.268946   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:08.768824   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:09.269232   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:09.769180   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:10.268960   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:10.768720   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:11.268345   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:10.494703   58864 crio.go:462] duration metric: took 1.267482597s to copy over tarball
	I0802 18:49:10.494814   58864 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:49:12.693191   58864 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198340805s)
	I0802 18:49:12.693227   58864 crio.go:469] duration metric: took 2.198487073s to extract the tarball
	I0802 18:49:12.693236   58864 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:49:12.730195   58864 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:49:12.769467   58864 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:49:12.769484   58864 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:49:12.769492   58864 kubeadm.go:934] updating node { 192.168.61.183 8444 v1.30.3 crio true true} ...
	I0802 18:49:12.769608   58864 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-504903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-504903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:49:12.769667   58864 ssh_runner.go:195] Run: crio config
	I0802 18:49:12.819619   58864 cni.go:84] Creating CNI manager for ""
	I0802 18:49:12.819647   58864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:49:12.819664   58864 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:49:12.819691   58864 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.183 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-504903 NodeName:default-k8s-diff-port-504903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:49:12.819877   58864 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.183
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-504903"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.183
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.183"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
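The kubeadm, kubelet and kube-proxy YAML shown above is rendered locally and copied onto the node; the restart path then replays individual kubeadm init phases against /var/tmp/minikube/kubeadm.yaml, as the "kubeadm init phase certs / kubeconfig / kubelet-start / control-plane / etcd" runs further down in this log show. A rough Go sketch of that phase sequence via os/exec; the binary and config paths are copied from the log, and minikube actually runs these as root over SSH rather than locally.

package main

import (
	"fmt"
	"os/exec"
)

const (
	kubeadmBin  = "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	kubeadmYAML = "/var/tmp/minikube/kubeadm.yaml"
)

// runPhase executes one "kubeadm init phase ..." against the rendered config,
// the same shape of command the ssh_runner lines below show.
func runPhase(phase ...string) error {
	args := append([]string{"init", "phase"}, phase...)
	args = append(args, "--config", kubeadmYAML)
	if out, err := exec.Command(kubeadmBin, args...).CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm %v: %v: %s", phase, err, out)
	}
	return nil
}

func main() {
	// Same phase order as the restart path in this log.
	for _, phase := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		if err := runPhase(phase...); err != nil {
			fmt.Println(err)
			return
		}
	}
}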
	I0802 18:49:12.819942   58864 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 18:49:12.829429   58864 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:49:12.829479   58864 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:49:12.837960   58864 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0802 18:49:12.853527   58864 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:49:12.870982   58864 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0802 18:49:12.889043   58864 ssh_runner.go:195] Run: grep 192.168.61.183	control-plane.minikube.internal$ /etc/hosts
	I0802 18:49:12.892511   58864 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:49:12.904038   58864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:49:13.019280   58864 ssh_runner.go:195] Run: sudo systemctl start kubelet
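The three "scp memory --> ..." lines above install the kubelet unit, its 10-kubeadm.conf drop-in, and the kubeadm config, after which systemd is reloaded and kubelet started. A condensed local equivalent of that install step; the ExecStart line is the one printed earlier in this log, the drop-in shown here is abbreviated, and minikube performs all of this over SSH.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Abbreviated drop-in contents, using the kubelet command line rendered earlier in this log.
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-504903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.183
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// Reload unit files and start kubelet, as the two ssh_runner calls above do.
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Printf("systemctl %v: %v: %s\n", args, err, out)
			return
		}
	}
}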
	I0802 18:49:13.035289   58864 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903 for IP: 192.168.61.183
	I0802 18:49:13.035311   58864 certs.go:194] generating shared ca certs ...
	I0802 18:49:13.035329   58864 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:13.035513   58864 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:49:13.035574   58864 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:49:13.035586   58864 certs.go:256] generating profile certs ...
	I0802 18:49:13.035689   58864 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/client.key
	I0802 18:49:13.035775   58864 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/apiserver.key.e4313fc6
	I0802 18:49:13.035872   58864 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/proxy-client.key
	I0802 18:49:13.036005   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:49:13.036035   58864 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:49:13.036041   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:49:13.036061   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:49:13.036086   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:49:13.036106   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:49:13.036141   58864 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:49:13.036796   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:49:13.074509   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:49:13.111256   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:49:13.144711   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:49:13.193674   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0802 18:49:13.231946   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0802 18:49:13.257885   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:49:13.280735   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:49:13.303014   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:49:13.325040   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:49:13.346919   58864 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:49:13.368992   58864 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:49:13.387714   58864 ssh_runner.go:195] Run: openssl version
	I0802 18:49:13.393614   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:49:13.404183   58864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:49:13.408656   58864 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:49:13.408709   58864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:49:13.414453   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:49:13.424887   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:49:13.435258   58864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:13.439459   58864 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:13.439512   58864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:13.444644   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:49:13.454155   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:49:13.463844   58864 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:49:13.467919   58864 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:49:13.468002   58864 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:49:13.473209   58864 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:49:13.482749   58864 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:49:13.486839   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:49:13.492209   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:49:13.497488   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:49:13.502830   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:49:13.508004   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:49:13.513377   58864 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
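Each "openssl x509 ... -checkend 86400" call above asks whether the given certificate expires within the next 24 hours. The same check written against Go's standard library looks roughly like this; the path used in main is one of the certs from the log and is purely illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question "openssl x509 -checkend 86400" answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}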
	I0802 18:49:13.518587   58864 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-504903 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-504903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:49:13.518701   58864 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:49:13.518739   58864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:49:13.552139   58864 cri.go:89] found id: ""
	I0802 18:49:13.552211   58864 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:49:13.561601   58864 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 18:49:13.561629   58864 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 18:49:13.561679   58864 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 18:49:13.570397   58864 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:49:13.571427   58864 kubeconfig.go:125] found "default-k8s-diff-port-504903" server: "https://192.168.61.183:8444"
	I0802 18:49:13.573550   58864 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 18:49:13.582146   58864 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.183
	I0802 18:49:13.582181   58864 kubeadm.go:1160] stopping kube-system containers ...
	I0802 18:49:13.582195   58864 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 18:49:13.582245   58864 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:49:13.619610   58864 cri.go:89] found id: ""
	I0802 18:49:13.619689   58864 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 18:49:13.634692   58864 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:49:13.643439   58864 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:49:13.643464   58864 kubeadm.go:157] found existing configuration files:
	
	I0802 18:49:13.643513   58864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0802 18:49:13.651816   58864 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:49:13.651868   58864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:49:13.660419   58864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0802 18:49:13.668695   58864 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:49:13.668776   58864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:49:13.677315   58864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0802 18:49:13.685370   58864 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:49:13.685417   58864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:49:13.693715   58864 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0802 18:49:13.701764   58864 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:49:13.701836   58864 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
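The four grep/rm pairs above all apply one rule: any leftover /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8444 is deleted so kubeadm can regenerate it in the next phase. A compact local sketch of that rule, with the paths and endpoint copied from the log; minikube runs the equivalent shell commands over SSH.

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8444")
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing at the wrong endpoint: drop it so kubeadm can rewrite it.
			fmt.Printf("removing stale %s\n", c)
			os.Remove(c)
		}
	}
}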
	I0802 18:49:13.710062   58864 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:49:13.718649   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:13.824334   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:10.959873   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:10.960538   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:10.960575   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:10.960471   60231 retry.go:31] will retry after 667.961457ms: waiting for machine to come up
	I0802 18:49:11.630527   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:11.631177   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:11.631227   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:11.631130   60231 retry.go:31] will retry after 747.234934ms: waiting for machine to come up
	I0802 18:49:12.380014   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:12.381186   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:12.381210   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:12.381136   60231 retry.go:31] will retry after 957.583816ms: waiting for machine to come up
	I0802 18:49:13.340597   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:13.341073   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:13.341102   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:13.341024   60231 retry.go:31] will retry after 1.134262347s: waiting for machine to come up
	I0802 18:49:14.476615   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:14.477111   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:14.477135   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:14.477061   60231 retry.go:31] will retry after 1.694227137s: waiting for machine to come up
	I0802 18:49:11.769141   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:12.268794   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:12.769269   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:13.268381   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:13.768918   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:14.268953   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:14.769249   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.268538   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.768893   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:16.269173   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:14.500999   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:14.714416   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:14.783609   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:14.850458   58864 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:49:14.850576   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.351630   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.851428   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:16.350902   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:16.851014   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.350844   58864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.366507   58864 api_server.go:72] duration metric: took 2.516048517s to wait for apiserver process to appear ...
	I0802 18:49:17.366539   58864 api_server.go:88] waiting for apiserver healthz status ...
	I0802 18:49:17.366572   58864 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8444/healthz ...
	I0802 18:49:19.651796   58864 api_server.go:279] https://192.168.61.183:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 18:49:19.651829   58864 api_server.go:103] status: https://192.168.61.183:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 18:49:19.651845   58864 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8444/healthz ...
	I0802 18:49:19.754440   58864 api_server.go:279] https://192.168.61.183:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 18:49:19.754469   58864 api_server.go:103] status: https://192.168.61.183:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 18:49:19.867699   58864 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8444/healthz ...
	I0802 18:49:19.874143   58864 api_server.go:279] https://192.168.61.183:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 18:49:19.874182   58864 api_server.go:103] status: https://192.168.61.183:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 18:49:20.367295   58864 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8444/healthz ...
	I0802 18:49:20.373353   58864 api_server.go:279] https://192.168.61.183:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 18:49:20.373388   58864 api_server.go:103] status: https://192.168.61.183:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 18:49:20.866890   58864 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8444/healthz ...
	I0802 18:49:20.871126   58864 api_server.go:279] https://192.168.61.183:8444/healthz returned 200:
	ok
	I0802 18:49:20.877434   58864 api_server.go:141] control plane version: v1.30.3
	I0802 18:49:20.877457   58864 api_server.go:131] duration metric: took 3.510912011s to wait for apiserver health ...
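The healthz polling above keeps hitting /healthz until it returns 200, tolerating the 403 (anonymous user not yet authorized) and 500 (post-start hooks still failing) responses along the way. A minimal sketch of that loop with net/http; certificate verification is skipped here only because this is an illustrative probe, not minikube's actual client setup, and the URL is the one from this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not trusted by this host; skip verification
		// for this illustrative probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.183:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}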
	I0802 18:49:20.877466   58864 cni.go:84] Creating CNI manager for ""
	I0802 18:49:20.877472   58864 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:49:20.879346   58864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:49:16.173319   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:16.173790   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:16.173814   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:16.173735   60231 retry.go:31] will retry after 1.719056305s: waiting for machine to come up
	I0802 18:49:17.894234   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:17.894664   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:17.894684   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:17.894640   60231 retry.go:31] will retry after 2.751645446s: waiting for machine to come up
	I0802 18:49:20.648297   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:20.648762   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:20.648812   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:20.648702   60231 retry.go:31] will retry after 3.121767081s: waiting for machine to come up
	I0802 18:49:16.769155   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.268386   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.768359   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:18.269292   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:18.768387   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:19.269201   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:19.768685   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:20.268340   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:20.769157   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:21.268288   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:20.880628   58864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:49:20.892220   58864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 18:49:20.911531   58864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 18:49:20.927317   58864 system_pods.go:59] 8 kube-system pods found
	I0802 18:49:20.927366   58864 system_pods.go:61] "coredns-7db6d8ff4d-k46j2" [3aedd5c3-6afd-4c1d-acec-e90822891130] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 18:49:20.927377   58864 system_pods.go:61] "etcd-default-k8s-diff-port-504903" [245136bf-bd88-410d-9aab-d58b8b0a489e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 18:49:20.927385   58864 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-504903" [0077a906-6bb1-43c4-aebf-f89f8f4cc757] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 18:49:20.927393   58864 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-504903" [3e91bdb3-1b46-413b-bdec-476626c2b73a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 18:49:20.927399   58864 system_pods.go:61] "kube-proxy-dfq8b" [230df431-7597-403f-a0db-88f4c99077c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 18:49:20.927404   58864 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-504903" [c9c982ed-0192-4dbb-9d23-3643ed088cdc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 18:49:20.927410   58864 system_pods.go:61] "metrics-server-569cc877fc-pw5tt" [35b4be07-d078-4cf8-80b9-15109421de2f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 18:49:20.927419   58864 system_pods.go:61] "storage-provisioner" [a7763010-83da-4af0-a923-9bf8f4508403] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0802 18:49:20.927428   58864 system_pods.go:74] duration metric: took 15.878601ms to wait for pod list to return data ...
	I0802 18:49:20.927436   58864 node_conditions.go:102] verifying NodePressure condition ...
	I0802 18:49:20.930313   58864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 18:49:20.930336   58864 node_conditions.go:123] node cpu capacity is 2
	I0802 18:49:20.930348   58864 node_conditions.go:105] duration metric: took 2.908449ms to run NodePressure ...
	I0802 18:49:20.930363   58864 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:49:21.220349   58864 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0802 18:49:21.224651   58864 kubeadm.go:739] kubelet initialised
	I0802 18:49:21.224669   58864 kubeadm.go:740] duration metric: took 4.284613ms waiting for restarted kubelet to initialise ...
	I0802 18:49:21.224683   58864 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
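From here the log waits, per component, for each system-critical pod to report the Ready condition, bailing out early on a pod whose node is not yet Ready (the pod_ready.go:97 lines). A bare-bones version of that per-pod wait using client-go; the kubeconfig path is a placeholder, and the pod name in main is just the coredns pod from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-k46j2", 4*time.Minute))
}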
	I0802 18:49:21.228973   58864 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:21.233453   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.233475   58864 pod_ready.go:81] duration metric: took 4.482951ms for pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:21.233483   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.233489   58864 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:21.237274   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.237294   58864 pod_ready.go:81] duration metric: took 3.79717ms for pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:21.237304   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.237310   58864 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:21.240881   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.240899   58864 pod_ready.go:81] duration metric: took 3.580247ms for pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:21.240907   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.240913   58864 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:21.317074   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.317105   58864 pod_ready.go:81] duration metric: took 76.180442ms for pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:21.317118   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.317127   58864 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dfq8b" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:21.714800   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "kube-proxy-dfq8b" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.714827   58864 pod_ready.go:81] duration metric: took 397.692235ms for pod "kube-proxy-dfq8b" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:21.714838   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "kube-proxy-dfq8b" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:21.714844   58864 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:22.115222   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:22.115250   58864 pod_ready.go:81] duration metric: took 400.397743ms for pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:22.115261   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:22.115268   58864 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:22.514988   58864 pod_ready.go:97] node "default-k8s-diff-port-504903" hosting pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:22.515015   58864 pod_ready.go:81] duration metric: took 399.739404ms for pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace to be "Ready" ...
	E0802 18:49:22.515025   58864 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-504903" hosting pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:22.515033   58864 pod_ready.go:38] duration metric: took 1.290342411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
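The pod_ready loop above skips every system pod because the node itself has not reported Ready yet, and then records the 1.29s it spent on that extra wait. For reference, a minimal client-go sketch of that gating check; the kubeconfig path is a placeholder and the helper names (isNodeReady, isPodReady) are illustrative, not minikube's own code:

    // Hedged sketch: check node Ready first, then a kube-system pod's Ready
    // condition, mirroring how the wait above skips pods while the node is NotReady.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isNodeReady(node *corev1.Node) bool {
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	node, err := cs.CoreV1().Nodes().Get(ctx, "default-k8s-diff-port-504903", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if !isNodeReady(node) {
    		fmt.Println("node not Ready yet; pod Ready checks would be skipped, as in the log")
    		return
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-k46j2", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("pod Ready:", isPodReady(pod))
    }

The same condition check, run in a poll loop with a deadline, is what produces the repeated `node_ready.go:53` lines further down.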
	I0802 18:49:22.515048   58864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:49:22.525930   58864 ops.go:34] apiserver oom_adj: -16
	I0802 18:49:22.525953   58864 kubeadm.go:597] duration metric: took 8.96431755s to restartPrimaryControlPlane
	I0802 18:49:22.525961   58864 kubeadm.go:394] duration metric: took 9.007379786s to StartCluster
	I0802 18:49:22.525976   58864 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:22.526057   58864 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:49:22.527071   58864 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:22.527348   58864 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.183 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 18:49:22.527462   58864 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 18:49:22.527560   58864 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-504903"
	I0802 18:49:22.527573   58864 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-504903"
	I0802 18:49:22.527584   58864 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:49:22.527603   58864 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-504903"
	I0802 18:49:22.527603   58864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-504903"
	W0802 18:49:22.527612   58864 addons.go:243] addon storage-provisioner should already be in state true
	I0802 18:49:22.527621   58864 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-504903"
	I0802 18:49:22.527671   58864 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-504903"
	W0802 18:49:22.527685   58864 addons.go:243] addon metrics-server should already be in state true
	I0802 18:49:22.527711   58864 host.go:66] Checking if "default-k8s-diff-port-504903" exists ...
	I0802 18:49:22.527639   58864 host.go:66] Checking if "default-k8s-diff-port-504903" exists ...
	I0802 18:49:22.528040   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.528060   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.528075   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.528075   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.528087   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.528113   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.529295   58864 out.go:177] * Verifying Kubernetes components...
	I0802 18:49:22.530815   58864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:49:22.543720   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38981
	I0802 18:49:22.543720   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41015
	I0802 18:49:22.543719   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0802 18:49:22.544150   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.544158   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.544150   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.544646   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.544664   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.544709   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.544726   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.544773   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.544793   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.545087   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.545092   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.545098   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.545320   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:49:22.545644   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.545664   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.545692   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.545760   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.548510   58864 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-504903"
	W0802 18:49:22.548535   58864 addons.go:243] addon default-storageclass should already be in state true
	I0802 18:49:22.548572   58864 host.go:66] Checking if "default-k8s-diff-port-504903" exists ...
	I0802 18:49:22.548949   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.548999   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.560490   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0802 18:49:22.560954   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46255
	I0802 18:49:22.560953   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.561324   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.561498   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.561521   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.561857   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.561873   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.561908   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.562060   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:49:22.562409   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.562664   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:49:22.563807   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:22.564304   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:22.566301   58864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:49:22.566364   58864 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0802 18:49:22.567551   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42029
	I0802 18:49:22.567704   58864 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:49:22.567720   58864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 18:49:22.567735   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:22.567931   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.568360   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.568382   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.568451   58864 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 18:49:22.568469   58864 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 18:49:22.568486   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:22.568722   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.569272   58864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:22.569299   58864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:22.571480   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.572021   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:22.572051   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.572077   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.572129   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:22.572301   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:22.572418   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:22.572441   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.572688   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:22.572781   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:22.572827   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:22.572917   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:22.573035   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:22.573151   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:22.591424   58864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0802 18:49:22.591781   58864 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:22.592216   58864 main.go:141] libmachine: Using API Version  1
	I0802 18:49:22.592243   58864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:22.592590   58864 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:22.592767   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetState
	I0802 18:49:22.594268   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .DriverName
	I0802 18:49:22.594534   58864 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 18:49:22.594551   58864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 18:49:22.594571   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHHostname
	I0802 18:49:22.597631   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.598038   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:0f:3b", ip: ""} in network mk-default-k8s-diff-port-504903: {Iface:virbr1 ExpiryTime:2024-08-02 19:48:58 +0000 UTC Type:0 Mac:52:54:00:83:0f:3b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:default-k8s-diff-port-504903 Clientid:01:52:54:00:83:0f:3b}
	I0802 18:49:22.598072   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | domain default-k8s-diff-port-504903 has defined IP address 192.168.61.183 and MAC address 52:54:00:83:0f:3b in network mk-default-k8s-diff-port-504903
	I0802 18:49:22.598153   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHPort
	I0802 18:49:22.598334   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHKeyPath
	I0802 18:49:22.598478   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .GetSSHUsername
	I0802 18:49:22.598614   58864 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/default-k8s-diff-port-504903/id_rsa Username:docker}
	I0802 18:49:22.700175   58864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:49:22.715595   58864 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-504903" to be "Ready" ...
	I0802 18:49:22.818253   58864 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 18:49:22.818275   58864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0802 18:49:22.819948   58864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 18:49:22.835249   58864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 18:49:22.852731   58864 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 18:49:22.852759   58864 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 18:49:22.880586   58864 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 18:49:22.880610   58864 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 18:49:22.926566   58864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 18:49:23.775161   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.775190   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.775197   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.775213   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.775469   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.775484   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.775493   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.775501   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.775517   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.775528   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.775537   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.775545   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.775711   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Closing plugin on server side
	I0802 18:49:23.775744   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.775752   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.775811   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.775828   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.775862   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Closing plugin on server side
	I0802 18:49:23.781255   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.781271   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.781501   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.781515   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.789306   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.789328   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.789579   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) DBG | Closing plugin on server side
	I0802 18:49:23.789595   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.789608   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.789626   58864 main.go:141] libmachine: Making call to close driver server
	I0802 18:49:23.789642   58864 main.go:141] libmachine: (default-k8s-diff-port-504903) Calling .Close
	I0802 18:49:23.789865   58864 main.go:141] libmachine: Successfully made call to close driver server
	I0802 18:49:23.789881   58864 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 18:49:23.789891   58864 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-504903"
	I0802 18:49:23.791763   58864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0802 18:49:23.792969   58864 addons.go:510] duration metric: took 1.265508145s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
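The addon step above copies the storage-provisioner and metrics-server manifests onto the VM and applies them with the bundled kubectl under the in-VM kubeconfig. A hedged sketch of that apply step, run via os/exec on the machine itself rather than through the ssh_runner the log shows; the paths mirror the log, while the exec wrapper is purely illustrative:

    // Hedged sketch: apply the metrics-server addon manifests the way the
    // logged kubectl invocation does (sudo accepts the KUBECONFIG=... assignment
    // as a leading argument). This is not minikube's own addon code path.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }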
	I0802 18:49:23.771787   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:23.772246   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:23.772274   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:23.772197   60231 retry.go:31] will retry after 2.938309786s: waiting for machine to come up
	I0802 18:49:21.768313   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:22.268845   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:22.769066   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:23.268672   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:23.768752   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:24.268335   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:24.768409   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:25.268773   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:25.768816   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:26.269062   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:24.718681   58864 node_ready.go:53] node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:26.719059   58864 node_ready.go:53] node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:28.719711   58864 node_ready.go:53] node "default-k8s-diff-port-504903" has status "Ready":"False"
	I0802 18:49:26.712141   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:26.712556   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find current IP address of domain newest-cni-198962 in network mk-newest-cni-198962
	I0802 18:49:26.712587   59196 main.go:141] libmachine: (newest-cni-198962) DBG | I0802 18:49:26.712512   60231 retry.go:31] will retry after 3.42807385s: waiting for machine to come up
	I0802 18:49:30.141697   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.142100   59196 main.go:141] libmachine: (newest-cni-198962) Found IP for machine: 192.168.72.48
	I0802 18:49:30.142128   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has current primary IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.142135   59196 main.go:141] libmachine: (newest-cni-198962) Reserving static IP address...
	I0802 18:49:30.142499   59196 main.go:141] libmachine: (newest-cni-198962) DBG | unable to find host DHCP lease matching {name: "newest-cni-198962", mac: "52:54:00:4f:40:55", ip: "192.168.72.48"} in network mk-newest-cni-198962
	I0802 18:49:30.218619   59196 main.go:141] libmachine: (newest-cni-198962) Reserved static IP address: 192.168.72.48
	I0802 18:49:30.218651   59196 main.go:141] libmachine: (newest-cni-198962) Waiting for SSH to be available...
	I0802 18:49:30.218661   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Getting to WaitForSSH function...
	I0802 18:49:30.221352   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.221941   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.221964   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.222236   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Using SSH client type: external
	I0802 18:49:30.222272   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa (-rw-------)
	I0802 18:49:30.222303   59196 main.go:141] libmachine: (newest-cni-198962) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.48 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:49:30.222316   59196 main.go:141] libmachine: (newest-cni-198962) DBG | About to run SSH command:
	I0802 18:49:30.222329   59196 main.go:141] libmachine: (newest-cni-198962) DBG | exit 0
	I0802 18:49:30.351396   59196 main.go:141] libmachine: (newest-cni-198962) DBG | SSH cmd err, output: <nil>: 
	I0802 18:49:30.351578   59196 main.go:141] libmachine: (newest-cni-198962) KVM machine creation complete!
	I0802 18:49:30.351884   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetConfigRaw
	I0802 18:49:30.352488   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:30.352656   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:30.352784   59196 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 18:49:30.352800   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetState
	I0802 18:49:30.354247   59196 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 18:49:30.354275   59196 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 18:49:30.354283   59196 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 18:49:30.354312   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:30.356576   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.356902   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.356942   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.357059   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:30.357232   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.357384   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.357560   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:30.357793   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:30.358033   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:30.358050   59196 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 18:49:30.466560   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
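The `exit 0` round-trip above is the native-Go SSH liveness probe (the earlier probe at 18:49:30.35 shelled out to the external ssh binary). A minimal sketch of the same check with golang.org/x/crypto/ssh, using the key path and address from the log; disabling host-key verification and panicking on error are shortcuts for brevity, not minikube's behaviour:

    // Hedged sketch: run "exit 0" over SSH to confirm the guest is reachable,
    // as the WaitForSSH step above does.
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // brevity only
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.72.48:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	if err := sess.Run("exit 0"); err != nil {
    		fmt.Println("guest not ready yet:", err)
    		return
    	}
    	fmt.Println("SSH is available")
    }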
	I0802 18:49:30.466588   59196 main.go:141] libmachine: Detecting the provisioner...
	I0802 18:49:30.466599   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:30.469422   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.469787   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.469814   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.469950   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:30.470159   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.470320   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.470437   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:30.470604   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:30.470771   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:30.470782   59196 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 18:49:30.587726   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 18:49:30.587818   59196 main.go:141] libmachine: found compatible host: buildroot
	I0802 18:49:30.587828   59196 main.go:141] libmachine: Provisioning with buildroot...
	I0802 18:49:30.587834   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetMachineName
	I0802 18:49:30.588123   59196 buildroot.go:166] provisioning hostname "newest-cni-198962"
	I0802 18:49:30.588153   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetMachineName
	I0802 18:49:30.588300   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:30.591182   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.591504   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.591535   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.591637   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:30.591800   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.591953   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.592095   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:30.592290   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:30.592456   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:30.592468   59196 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-198962 && echo "newest-cni-198962" | sudo tee /etc/hostname
	I0802 18:49:30.718433   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-198962
	
	I0802 18:49:30.718466   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:30.721778   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.722192   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.722226   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.722437   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:30.722630   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.722798   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:30.722939   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:30.723156   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:30.723454   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:30.723487   59196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-198962' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-198962/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-198962' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:49:30.844190   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:49:30.844221   59196 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:49:30.844247   59196 buildroot.go:174] setting up certificates
	I0802 18:49:30.844261   59196 provision.go:84] configureAuth start
	I0802 18:49:30.844276   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetMachineName
	I0802 18:49:30.844569   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetIP
	I0802 18:49:30.847806   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.848174   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.848203   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.848363   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:30.850772   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.851179   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:30.851204   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:30.851375   59196 provision.go:143] copyHostCerts
	I0802 18:49:30.851424   59196 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:49:30.851434   59196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:49:30.851496   59196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:49:30.851622   59196 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:49:30.851633   59196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:49:30.851665   59196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:49:30.851740   59196 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:49:30.851747   59196 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:49:30.851766   59196 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:49:30.851825   59196 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.newest-cni-198962 san=[127.0.0.1 192.168.72.48 localhost minikube newest-cni-198962]
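provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube, and the machine name. A compact crypto/x509 sketch of producing that kind of SAN certificate; it self-signs for brevity, whereas the log shows minikube signing against its ca.pem / ca-key.pem pair:

    // Hedged sketch: build a server certificate carrying the SANs listed in the
    // log entry above. Self-signed here, CA-signed in minikube.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-198962"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-198962"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.48")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }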
	I0802 18:49:26.768485   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:27.269191   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:27.769035   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:28.268999   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:28.768580   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:29.268534   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:29.768543   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:30.268550   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:30.768427   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:31.268562   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:31.740101   58307 start.go:364] duration metric: took 57.465618083s to acquireMachinesLock for "no-preload-407306"
	I0802 18:49:31.740149   58307 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:49:31.740160   58307 fix.go:54] fixHost starting: 
	I0802 18:49:31.740575   58307 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:49:31.740609   58307 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:49:31.759608   58307 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42561
	I0802 18:49:31.760082   58307 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:49:31.760653   58307 main.go:141] libmachine: Using API Version  1
	I0802 18:49:31.760675   58307 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:49:31.761037   58307 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:49:31.761211   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:49:31.761366   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetState
	I0802 18:49:31.763383   58307 fix.go:112] recreateIfNeeded on no-preload-407306: state=Stopped err=<nil>
	I0802 18:49:31.763413   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	W0802 18:49:31.763564   58307 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:49:31.765158   58307 out.go:177] * Restarting existing kvm2 VM for "no-preload-407306" ...
	I0802 18:49:31.065553   59196 provision.go:177] copyRemoteCerts
	I0802 18:49:31.065606   59196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:49:31.065629   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.068454   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.068795   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.068821   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.069025   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.069269   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.069437   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.069612   59196 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa Username:docker}
	I0802 18:49:31.153211   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 18:49:31.178008   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:49:31.201120   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0802 18:49:31.223664   59196 provision.go:87] duration metric: took 379.387992ms to configureAuth
	I0802 18:49:31.223698   59196 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:49:31.223893   59196 config.go:182] Loaded profile config "newest-cni-198962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:49:31.223980   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.226897   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.227377   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.227410   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.227597   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.227798   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.227981   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.228154   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.228320   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:31.228540   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:31.228563   59196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:49:31.491386   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:49:31.491415   59196 main.go:141] libmachine: Checking connection to Docker...
	I0802 18:49:31.491424   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetURL
	I0802 18:49:31.492789   59196 main.go:141] libmachine: (newest-cni-198962) DBG | Using libvirt version 6000000
	I0802 18:49:31.495222   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.495666   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.495699   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.495847   59196 main.go:141] libmachine: Docker is up and running!
	I0802 18:49:31.495864   59196 main.go:141] libmachine: Reticulating splines...
	I0802 18:49:31.495870   59196 client.go:171] duration metric: took 23.771267159s to LocalClient.Create
	I0802 18:49:31.495894   59196 start.go:167] duration metric: took 23.771323617s to libmachine.API.Create "newest-cni-198962"
	I0802 18:49:31.495904   59196 start.go:293] postStartSetup for "newest-cni-198962" (driver="kvm2")
	I0802 18:49:31.495916   59196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:49:31.495931   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:31.496148   59196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:49:31.496182   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.498569   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.498935   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.498962   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.499133   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.499284   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.499457   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.499601   59196 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa Username:docker}
	I0802 18:49:31.585272   59196 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:49:31.589397   59196 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:49:31.589424   59196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:49:31.589503   59196 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:49:31.589606   59196 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:49:31.589707   59196 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:49:31.599086   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:49:31.621205   59196 start.go:296] duration metric: took 125.287907ms for postStartSetup
	I0802 18:49:31.621263   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetConfigRaw
	I0802 18:49:31.621876   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetIP
	I0802 18:49:31.624837   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.625289   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.625321   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.625687   59196 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/config.json ...
	I0802 18:49:31.625871   59196 start.go:128] duration metric: took 23.925594461s to createHost
	I0802 18:49:31.625893   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.628295   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.628804   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.628834   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.628952   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.629136   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.629293   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.629418   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.629540   59196 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:31.629745   59196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.48 22 <nil> <nil>}
	I0802 18:49:31.629757   59196 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:49:31.739959   59196 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722624571.716323520
	
	I0802 18:49:31.739982   59196 fix.go:216] guest clock: 1722624571.716323520
	I0802 18:49:31.739991   59196 fix.go:229] Guest: 2024-08-02 18:49:31.71632352 +0000 UTC Remote: 2024-08-02 18:49:31.625881599 +0000 UTC m=+265.758657445 (delta=90.441921ms)
	I0802 18:49:31.740014   59196 fix.go:200] guest clock delta is within tolerance: 90.441921ms
	I0802 18:49:31.740021   59196 start.go:83] releasing machines lock for "newest-cni-198962", held for 24.040288431s
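	The fix.go lines above compare the guest clock against the host clock and proceed because the ~90ms delta is within tolerance. A minimal Go sketch of that kind of drift check (the helper name and the one-second tolerance are assumptions for illustration, not minikube's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock, returning the absolute drift and whether it is acceptable.
func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // roughly the delta reported in the log
	if delta, ok := withinTolerance(host, guest, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
```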
	I0802 18:49:31.740052   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:31.740329   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetIP
	I0802 18:49:31.743282   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.743816   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.743861   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.744069   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:31.744796   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:31.745002   59196 main.go:141] libmachine: (newest-cni-198962) Calling .DriverName
	I0802 18:49:31.745206   59196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:49:31.745252   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.745329   59196 ssh_runner.go:195] Run: cat /version.json
	I0802 18:49:31.745344   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHHostname
	I0802 18:49:31.748986   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.749249   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.749458   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.749477   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.749744   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.749798   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:31.749823   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:31.749925   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.750099   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.750190   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHPort
	I0802 18:49:31.751063   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHKeyPath
	I0802 18:49:31.751075   59196 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa Username:docker}
	I0802 18:49:31.751249   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetSSHUsername
	I0802 18:49:31.751416   59196 sshutil.go:53] new ssh client: &{IP:192.168.72.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/newest-cni-198962/id_rsa Username:docker}
	I0802 18:49:31.836899   59196 ssh_runner.go:195] Run: systemctl --version
	I0802 18:49:31.870709   59196 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:49:32.032637   59196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:49:32.038487   59196 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:49:32.038547   59196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:49:32.058460   59196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:49:32.058488   59196 start.go:495] detecting cgroup driver to use...
	I0802 18:49:32.058553   59196 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:49:32.077428   59196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:49:32.091533   59196 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:49:32.091606   59196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:49:32.105366   59196 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:49:32.119693   59196 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:49:32.247668   59196 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:49:32.430908   59196 docker.go:233] disabling docker service ...
	I0802 18:49:32.430981   59196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:49:32.449848   59196 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:49:32.467464   59196 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:49:32.607559   59196 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:49:32.745129   59196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:49:32.763277   59196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:49:32.784677   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:33.068832   59196 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0802 18:49:33.068913   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.079869   59196 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:49:33.079946   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.090285   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.101178   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.111564   59196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:49:33.122680   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.132830   59196 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:49:33.149098   59196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
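	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup as "pod". A rough Go sketch of the same substitutions applied in-process with regexp (the sample config text is invented for illustration; the real edits run over SSH via sed, as logged):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, mirroring: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Switch the cgroup driver, mirroring: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it right after cgroup_manager,
	// mirroring the '/conmon_cgroup = .*/d' and '/cgroup_manager = .*/a conmon_cgroup = "pod"' pair.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
```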
	I0802 18:49:33.158755   59196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:49:33.168540   59196 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:49:33.168604   59196 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:49:33.182205   59196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 18:49:33.193954   59196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:49:33.318986   59196 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:49:33.455126   59196 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:49:33.455223   59196 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:49:33.459931   59196 start.go:563] Will wait 60s for crictl version
	I0802 18:49:33.459983   59196 ssh_runner.go:195] Run: which crictl
	I0802 18:49:33.464788   59196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:49:33.512161   59196 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:49:33.512258   59196 ssh_runner.go:195] Run: crio --version
	I0802 18:49:33.541339   59196 ssh_runner.go:195] Run: crio --version
	I0802 18:49:33.572303   59196 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0802 18:49:33.573507   59196 main.go:141] libmachine: (newest-cni-198962) Calling .GetIP
	I0802 18:49:33.576690   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:33.577058   59196 main.go:141] libmachine: (newest-cni-198962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:40:55", ip: ""} in network mk-newest-cni-198962: {Iface:virbr4 ExpiryTime:2024-08-02 19:49:21 +0000 UTC Type:0 Mac:52:54:00:4f:40:55 Iaid: IPaddr:192.168.72.48 Prefix:24 Hostname:newest-cni-198962 Clientid:01:52:54:00:4f:40:55}
	I0802 18:49:33.577089   59196 main.go:141] libmachine: (newest-cni-198962) DBG | domain newest-cni-198962 has defined IP address 192.168.72.48 and MAC address 52:54:00:4f:40:55 in network mk-newest-cni-198962
	I0802 18:49:33.577303   59196 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0802 18:49:33.581487   59196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:49:33.595186   59196 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0802 18:49:30.720051   58864 node_ready.go:49] node "default-k8s-diff-port-504903" has status "Ready":"True"
	I0802 18:49:30.720076   58864 node_ready.go:38] duration metric: took 8.004453315s for node "default-k8s-diff-port-504903" to be "Ready" ...
	I0802 18:49:30.720088   58864 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 18:49:30.726139   58864 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:30.735182   58864 pod_ready.go:92] pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:30.735207   58864 pod_ready.go:81] duration metric: took 9.042003ms for pod "coredns-7db6d8ff4d-k46j2" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:30.735220   58864 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:30.739456   58864 pod_ready.go:92] pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:30.739484   58864 pod_ready.go:81] duration metric: took 4.25634ms for pod "etcd-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:30.739496   58864 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:32.263718   58864 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:32.263744   58864 pod_ready.go:81] duration metric: took 1.524239125s for pod "kube-apiserver-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:32.263758   58864 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.272363   58864 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:33.596570   59196 kubeadm.go:883] updating cluster {Name:newest-cni-198962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-rc.0 ClusterName:newest-cni-198962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:49:33.596768   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:33.876084   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:34.162696   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:34.420640   59196 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 18:49:34.420782   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:34.688913   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:34.968150   59196 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0802 18:49:35.244060   59196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:49:35.278698   59196 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-rc.0". assuming images are not preloaded.
	I0802 18:49:35.278782   59196 ssh_runner.go:195] Run: which lz4
	I0802 18:49:35.283749   59196 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:49:35.288210   59196 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:49:35.288246   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389126804 bytes)
	I0802 18:49:31.768936   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:32.268934   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:32.769268   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:33.268701   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:33.768714   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:34.268342   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:34.769189   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:35.268618   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:35.769096   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:36.269207   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:31.766602   58307 main.go:141] libmachine: (no-preload-407306) Calling .Start
	I0802 18:49:31.770847   58307 main.go:141] libmachine: (no-preload-407306) Ensuring networks are active...
	I0802 18:49:31.771779   58307 main.go:141] libmachine: (no-preload-407306) Ensuring network default is active
	I0802 18:49:31.772235   58307 main.go:141] libmachine: (no-preload-407306) Ensuring network mk-no-preload-407306 is active
	I0802 18:49:31.772749   58307 main.go:141] libmachine: (no-preload-407306) Getting domain xml...
	I0802 18:49:31.773652   58307 main.go:141] libmachine: (no-preload-407306) Creating domain...
	I0802 18:49:33.116487   58307 main.go:141] libmachine: (no-preload-407306) Waiting to get IP...
	I0802 18:49:33.117278   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.117861   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.117917   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.117824   60523 retry.go:31] will retry after 299.393277ms: waiting for machine to come up
	I0802 18:49:33.419381   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.419900   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.419939   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.419867   60523 retry.go:31] will retry after 336.579779ms: waiting for machine to come up
	I0802 18:49:33.758538   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:33.758983   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:33.759017   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:33.758930   60523 retry.go:31] will retry after 381.841162ms: waiting for machine to come up
	I0802 18:49:34.142424   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:34.142991   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:34.143024   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:34.142948   60523 retry.go:31] will retry after 595.515127ms: waiting for machine to come up
	I0802 18:49:34.739739   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:34.740253   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:34.740285   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:34.740179   60523 retry.go:31] will retry after 645.87755ms: waiting for machine to come up
	I0802 18:49:35.388031   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:35.388494   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:35.388522   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:35.388460   60523 retry.go:31] will retry after 779.258683ms: waiting for machine to come up
	I0802 18:49:36.169313   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:36.169980   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:36.170008   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:36.169938   60523 retry.go:31] will retry after 786.851499ms: waiting for machine to come up
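	The retry.go lines above show the kvm2 driver polling for the domain's DHCP lease, waiting a little longer after each failed attempt. A small Go sketch of that retry-with-growing-backoff pattern (the probe function, growth factor, and jitter are stand-ins, not the libmachine implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain ...".
var errNoIP = errors.New("machine has no IP yet")

// waitForIP polls probe until it returns an address or attempts run out,
// sleeping a jittered, slowly growing interval after each failure.
func waitForIP(probe func() (string, error), attempts int) (string, error) {
	wait := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := probe(); err == nil {
			return ip, nil
		}
		d := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
		time.Sleep(d)
		wait = wait * 3 / 2 // grow the base delay, roughly like the increasing waits in the log
	}
	return "", fmt.Errorf("machine never reported an IP after %d attempts", attempts)
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errNoIP
		}
		return "192.168.61.10", nil // hypothetical address for the demo
	}, 10)
	fmt.Println(ip, err)
}
```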
	I0802 18:49:34.771846   58864 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:34.771874   58864 pod_ready.go:81] duration metric: took 2.508105374s for pod "kube-controller-manager-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.771886   58864 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dfq8b" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.777184   58864 pod_ready.go:92] pod "kube-proxy-dfq8b" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:34.777210   58864 pod_ready.go:81] duration metric: took 5.315471ms for pod "kube-proxy-dfq8b" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.777222   58864 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.781664   58864 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace has status "Ready":"True"
	I0802 18:49:34.781687   58864 pod_ready.go:81] duration metric: took 4.457536ms for pod "kube-scheduler-default-k8s-diff-port-504903" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:34.781698   58864 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace to be "Ready" ...
	I0802 18:49:36.789356   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:39.288228   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:36.542259   59196 crio.go:462] duration metric: took 1.258536067s to copy over tarball
	I0802 18:49:36.542336   59196 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:49:38.643785   59196 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.101416311s)
	I0802 18:49:38.643815   59196 crio.go:469] duration metric: took 2.101529114s to extract the tarball
	I0802 18:49:38.643824   59196 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:49:38.683256   59196 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:49:38.726585   59196 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 18:49:38.726615   59196 cache_images.go:84] Images are preloaded, skipping loading
	I0802 18:49:38.726625   59196 kubeadm.go:934] updating node { 192.168.72.48 8443 v1.31.0-rc.0 crio true true} ...
	I0802 18:49:38.726764   59196 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-198962 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-198962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:49:38.726848   59196 ssh_runner.go:195] Run: crio config
	I0802 18:49:38.769611   59196 cni.go:84] Creating CNI manager for ""
	I0802 18:49:38.769638   59196 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:49:38.769655   59196 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0802 18:49:38.769687   59196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.48 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-198962 NodeName:newest-cni-198962 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 18:49:38.769896   59196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-198962"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:49:38.769977   59196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0802 18:49:38.782653   59196 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:49:38.782721   59196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:49:38.794735   59196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0802 18:49:38.812208   59196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0802 18:49:38.828610   59196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0802 18:49:38.845257   59196 ssh_runner.go:195] Run: grep 192.168.72.48	control-plane.minikube.internal$ /etc/hosts
	I0802 18:49:38.849025   59196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.48	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:49:38.864344   59196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:49:39.013362   59196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:49:39.045496   59196 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962 for IP: 192.168.72.48
	I0802 18:49:39.045521   59196 certs.go:194] generating shared ca certs ...
	I0802 18:49:39.045548   59196 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.045704   59196 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:49:39.045783   59196 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:49:39.045797   59196 certs.go:256] generating profile certs ...
	I0802 18:49:39.045863   59196 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.key
	I0802 18:49:39.045889   59196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.crt with IP's: []
	I0802 18:49:39.110212   59196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.crt ...
	I0802 18:49:39.110249   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.crt: {Name:mke89d7436ceb50b88400ab7582272f6d4244d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.110462   59196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.key ...
	I0802 18:49:39.110475   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/client.key: {Name:mk17c7855edb97705c19eca63d06beb130be6cd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.110589   59196 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key.85807bb5
	I0802 18:49:39.110606   59196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt.85807bb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.48]
	I0802 18:49:39.216286   59196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt.85807bb5 ...
	I0802 18:49:39.216318   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt.85807bb5: {Name:mk8de8a271ea1f4a7a44057d744527100bf035d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.216480   59196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key.85807bb5 ...
	I0802 18:49:39.216493   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key.85807bb5: {Name:mka8a792e08938c4129988efc20dea9c3a58955b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.216586   59196 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt.85807bb5 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt
	I0802 18:49:39.216710   59196 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key.85807bb5 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key
	I0802 18:49:39.216797   59196 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.key
	I0802 18:49:39.216816   59196 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.crt with IP's: []
	I0802 18:49:39.476944   59196 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.crt ...
	I0802 18:49:39.476977   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.crt: {Name:mkb801f3590e98cb559f34d0e80fd8c17329827c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:49:39.477159   59196 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.key ...
	I0802 18:49:39.477177   59196 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.key: {Name:mkdd88e3707110b701b79f44e17ea88c9c211549 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
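	crypto.go above generates the profile's client, apiserver, and proxy-client key pairs, with the apiserver certificate signed for the service IP, loopback, and node IP listed in the log. A self-contained Go sketch of issuing such an IP-SAN certificate with the standard library (the CA here is created on the fly and error handling is elided, purely for illustration; minikube signs against its existing minikubeCA key instead):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key and template (self-signed, for the sketch only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert signed for the service IP, loopback, and node IP,
	// mirroring the "with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.48]" line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.48"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```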
	I0802 18:49:39.477371   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:49:39.477411   59196 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:49:39.477421   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:49:39.477442   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:49:39.477464   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:49:39.477485   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:49:39.477522   59196 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:49:39.478086   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:49:39.510370   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:49:39.534428   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:49:39.557117   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:49:39.580263   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0802 18:49:39.602634   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:49:39.625166   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:49:39.648012   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/newest-cni-198962/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 18:49:39.672751   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:49:39.696995   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:49:39.719645   59196 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:49:39.742160   59196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:49:39.758841   59196 ssh_runner.go:195] Run: openssl version
	I0802 18:49:39.764505   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:49:39.775720   59196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:49:39.782410   59196 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:49:39.782472   59196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:49:39.789306   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:49:39.803437   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:49:39.819489   59196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:49:39.827759   59196 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:49:39.827829   59196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:49:39.836785   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:49:39.853195   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:49:39.864602   59196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:39.871605   59196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:39.871682   59196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:49:39.881315   59196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:49:39.892576   59196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:49:39.897132   59196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 18:49:39.897210   59196 kubeadm.go:392] StartCluster: {Name:newest-cni-198962 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-rc.0 ClusterName:newest-cni-198962 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.48 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:49:39.897329   59196 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:49:39.897405   59196 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:49:39.938056   59196 cri.go:89] found id: ""
	I0802 18:49:39.938138   59196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:49:39.947926   59196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:49:39.957979   59196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:49:39.970739   59196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:49:39.970767   59196 kubeadm.go:157] found existing configuration files:
	
	I0802 18:49:39.970833   59196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:49:39.983092   59196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:49:39.983165   59196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:49:39.993801   59196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:49:40.004400   59196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:49:40.004468   59196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:49:40.014427   59196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:49:40.023924   59196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:49:40.023995   59196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:49:40.033945   59196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:49:40.043081   59196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:49:40.043171   59196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:49:40.052506   59196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-rc.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:49:40.162409   59196 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-rc.0
	I0802 18:49:40.162542   59196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:49:40.268631   59196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:49:40.268755   59196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:49:40.268864   59196 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0802 18:49:40.285978   59196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:49:40.310545   59196 out.go:204]   - Generating certificates and keys ...
	I0802 18:49:40.310682   59196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:49:40.310785   59196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:49:40.727898   59196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 18:49:40.813937   59196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 18:49:36.768436   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:37.269059   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:37.769310   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:38.268396   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:38.768735   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:39.269062   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:39.769010   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:40.268815   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:40.768398   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:41.268785   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:36.958309   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:36.958802   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:36.958826   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:36.958763   60523 retry.go:31] will retry after 1.182844308s: waiting for machine to come up
	I0802 18:49:38.143070   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:38.143657   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:38.143691   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:38.143591   60523 retry.go:31] will retry after 1.210856616s: waiting for machine to come up
	I0802 18:49:39.356008   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:39.356449   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:39.356478   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:39.356411   60523 retry.go:31] will retry after 2.076557718s: waiting for machine to come up
	I0802 18:49:41.435125   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:41.435669   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:41.435701   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:41.435606   60523 retry.go:31] will retry after 2.608166994s: waiting for machine to come up
	I0802 18:49:40.971831   59196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 18:49:41.066801   59196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 18:49:41.341189   59196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 18:49:41.341403   59196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-198962] and IPs [192.168.72.48 127.0.0.1 ::1]
	I0802 18:49:41.481962   59196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 18:49:41.482194   59196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-198962] and IPs [192.168.72.48 127.0.0.1 ::1]
	I0802 18:49:41.639358   59196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 18:49:41.802907   59196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 18:49:42.004290   59196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 18:49:42.004383   59196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:49:42.065964   59196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:49:42.184376   59196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 18:49:42.320803   59196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:49:42.507812   59196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:49:42.607678   59196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:49:42.608252   59196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:49:42.616258   59196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:49:41.289070   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:43.297231   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:42.682385   59196 out.go:204]   - Booting up control plane ...
	I0802 18:49:42.682543   59196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:49:42.682656   59196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:49:42.682749   59196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:49:42.682922   59196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:49:42.683053   59196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:49:42.683141   59196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:49:42.802370   59196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 18:49:42.802574   59196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0802 18:49:43.304441   59196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.40432ms
	I0802 18:49:43.304632   59196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 18:49:41.768380   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:42.268246   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:42.769151   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:43.269202   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:43.768417   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:44.268594   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:44.768407   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:45.269136   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:45.768811   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:46.268264   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:44.045442   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:44.045840   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:44.045867   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:44.045792   60523 retry.go:31] will retry after 2.597008412s: waiting for machine to come up
	I0802 18:49:46.644314   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:46.644702   58307 main.go:141] libmachine: (no-preload-407306) DBG | unable to find current IP address of domain no-preload-407306 in network mk-no-preload-407306
	I0802 18:49:46.644727   58307 main.go:141] libmachine: (no-preload-407306) DBG | I0802 18:49:46.644661   60523 retry.go:31] will retry after 3.905375169s: waiting for machine to come up
	I0802 18:49:48.306130   59196 kubeadm.go:310] [api-check] The API server is healthy after 5.002327706s
	I0802 18:49:48.318630   59196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 18:49:48.351359   59196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 18:49:48.382591   59196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 18:49:48.382846   59196 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-198962 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 18:49:48.398636   59196 kubeadm.go:310] [bootstrap-token] Using token: 80co3m.x0nox01g6wp5dk7b
	I0802 18:49:45.787517   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:47.789564   58864 pod_ready.go:102] pod "metrics-server-569cc877fc-pw5tt" in "kube-system" namespace has status "Ready":"False"
	I0802 18:49:48.400100   59196 out.go:204]   - Configuring RBAC rules ...
	I0802 18:49:48.400260   59196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 18:49:48.411606   59196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 18:49:48.423786   59196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 18:49:48.430133   59196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 18:49:48.439086   59196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 18:49:48.443758   59196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 18:49:48.714263   59196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 18:49:49.186600   59196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 18:49:49.714369   59196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 18:49:49.715394   59196 kubeadm.go:310] 
	I0802 18:49:49.715506   59196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 18:49:49.715517   59196 kubeadm.go:310] 
	I0802 18:49:49.715620   59196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 18:49:49.715639   59196 kubeadm.go:310] 
	I0802 18:49:49.715671   59196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 18:49:49.715765   59196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 18:49:49.715835   59196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 18:49:49.715845   59196 kubeadm.go:310] 
	I0802 18:49:49.715934   59196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 18:49:49.715954   59196 kubeadm.go:310] 
	I0802 18:49:49.716012   59196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 18:49:49.716021   59196 kubeadm.go:310] 
	I0802 18:49:49.716088   59196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 18:49:49.716206   59196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 18:49:49.716313   59196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 18:49:49.716328   59196 kubeadm.go:310] 
	I0802 18:49:49.716433   59196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 18:49:49.716557   59196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 18:49:49.716581   59196 kubeadm.go:310] 
	I0802 18:49:49.716699   59196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 80co3m.x0nox01g6wp5dk7b \
	I0802 18:49:49.716830   59196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 18:49:49.716861   59196 kubeadm.go:310] 	--control-plane 
	I0802 18:49:49.716872   59196 kubeadm.go:310] 
	I0802 18:49:49.716997   59196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 18:49:49.717012   59196 kubeadm.go:310] 
	I0802 18:49:49.717128   59196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 80co3m.x0nox01g6wp5dk7b \
	I0802 18:49:49.717281   59196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 18:49:49.718307   59196 kubeadm.go:310] W0802 18:49:40.143941     851 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0802 18:49:49.718644   59196 kubeadm.go:310] W0802 18:49:40.144907     851 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0802 18:49:49.718798   59196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:49:49.718823   59196 cni.go:84] Creating CNI manager for ""
	I0802 18:49:49.718833   59196 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:49:49.720578   59196 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 18:49:49.721842   59196 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 18:49:49.734119   59196 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 18:49:49.755962   59196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 18:49:49.756044   59196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:49:49.756088   59196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-198962 minikube.k8s.io/updated_at=2024_08_02T18_49_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=newest-cni-198962 minikube.k8s.io/primary=true
	I0802 18:49:49.793475   59196 ops.go:34] apiserver oom_adj: -16
	I0802 18:49:49.962853   59196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:49:50.463030   59196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 18:49:46.768999   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:47.268792   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:47.768553   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:48.268258   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:48.768480   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:49.268353   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:49.768558   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:50.268832   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:50.768778   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:51.268701   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:50.552843   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.553201   58307 main.go:141] libmachine: (no-preload-407306) Found IP for machine: 192.168.39.168
	I0802 18:49:50.553221   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has current primary IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.553246   58307 main.go:141] libmachine: (no-preload-407306) Reserving static IP address...
	I0802 18:49:50.553676   58307 main.go:141] libmachine: (no-preload-407306) Reserved static IP address: 192.168.39.168
	I0802 18:49:50.553697   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "no-preload-407306", mac: "52:54:00:bd:56:69", ip: "192.168.39.168"} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.553704   58307 main.go:141] libmachine: (no-preload-407306) Waiting for SSH to be available...
	I0802 18:49:50.553723   58307 main.go:141] libmachine: (no-preload-407306) DBG | skip adding static IP to network mk-no-preload-407306 - found existing host DHCP lease matching {name: "no-preload-407306", mac: "52:54:00:bd:56:69", ip: "192.168.39.168"}
	I0802 18:49:50.553733   58307 main.go:141] libmachine: (no-preload-407306) DBG | Getting to WaitForSSH function...
	I0802 18:49:50.555684   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.556042   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.556070   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.556192   58307 main.go:141] libmachine: (no-preload-407306) DBG | Using SSH client type: external
	I0802 18:49:50.556215   58307 main.go:141] libmachine: (no-preload-407306) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa (-rw-------)
	I0802 18:49:50.556245   58307 main.go:141] libmachine: (no-preload-407306) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:49:50.556264   58307 main.go:141] libmachine: (no-preload-407306) DBG | About to run SSH command:
	I0802 18:49:50.556280   58307 main.go:141] libmachine: (no-preload-407306) DBG | exit 0
	I0802 18:49:50.679205   58307 main.go:141] libmachine: (no-preload-407306) DBG | SSH cmd err, output: <nil>: 
	I0802 18:49:50.679609   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetConfigRaw
	I0802 18:49:50.680249   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetIP
	I0802 18:49:50.683007   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.683366   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.683396   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.683575   58307 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/config.json ...
	I0802 18:49:50.683850   58307 machine.go:94] provisionDockerMachine start ...
	I0802 18:49:50.683881   58307 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 18:49:50.684087   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.686447   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.686816   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.686842   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.686981   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.687186   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.687393   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.687560   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.687758   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.687913   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.687923   58307 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:49:50.791371   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:49:50.791395   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:50.791626   58307 buildroot.go:166] provisioning hostname "no-preload-407306"
	I0802 18:49:50.791647   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:50.791861   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.794279   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.794606   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.794646   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.794774   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.794952   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.795070   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.795234   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.795399   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.795615   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.795634   58307 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407306 && echo "no-preload-407306" | sudo tee /etc/hostname
	I0802 18:49:50.911657   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407306
	
	I0802 18:49:50.911690   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:50.915360   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.915752   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:50.915776   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:50.915982   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:50.916222   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.916422   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:50.916590   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:50.916826   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:50.917040   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:50.917067   58307 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407306' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407306/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407306' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:49:51.027987   58307 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:49:51.028024   58307 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:49:51.028057   58307 buildroot.go:174] setting up certificates
	I0802 18:49:51.028075   58307 provision.go:84] configureAuth start
	I0802 18:49:51.028089   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetMachineName
	I0802 18:49:51.028375   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetIP
	I0802 18:49:51.031265   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.031756   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.031794   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.031922   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.034918   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.035346   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.035372   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.035476   58307 provision.go:143] copyHostCerts
	I0802 18:49:51.035545   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:49:51.035559   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:49:51.035627   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:49:51.035764   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:49:51.035775   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:49:51.035812   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:49:51.035902   58307 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:49:51.035913   58307 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:49:51.035942   58307 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:49:51.036022   58307 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.no-preload-407306 san=[127.0.0.1 192.168.39.168 localhost minikube no-preload-407306]
	I0802 18:49:51.168560   58307 provision.go:177] copyRemoteCerts
	I0802 18:49:51.168618   58307 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:49:51.168644   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.171295   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.171647   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.171677   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.171833   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:51.172034   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.172211   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:51.172360   58307 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa Username:docker}
	I0802 18:49:51.258043   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:49:51.280372   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0802 18:49:51.304010   58307 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:49:51.327808   58307 provision.go:87] duration metric: took 299.720899ms to configureAuth
	I0802 18:49:51.327839   58307 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:49:51.328049   58307 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 18:49:51.328146   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 18:49:51.330357   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.330649   58307 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 18:49:51.330674   58307 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 18:49:51.330856   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 18:49:51.331077   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.331298   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 18:49:51.331481   58307 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 18:49:51.331682   58307 main.go:141] libmachine: Using SSH client type: native
	I0802 18:49:51.331870   58307 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.168 22 <nil> <nil>}
	I0802 18:49:51.331890   58307 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:49:51.490654   58307 main.go:141] libmachine: SSH cmd err, output: Process exited with status 1: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0802 18:49:51.490694   58307 buildroot.go:191] Error setting container-runtime options during provisioning ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	I0802 18:49:51.490702   58307 machine.go:97] duration metric: took 806.832499ms to provisionDockerMachine
	I0802 18:49:51.490726   58307 fix.go:56] duration metric: took 19.750567005s for fixHost
	I0802 18:49:51.490731   58307 start.go:83] releasing machines lock for "no-preload-407306", held for 19.750607176s
	W0802 18:49:51.490806   58307 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p no-preload-407306" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	I0802 18:49:51.493574   58307 out.go:177] 
	W0802 18:49:51.494913   58307 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	W0802 18:49:51.494934   58307 out.go:239] * 
	W0802 18:49:51.495792   58307 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:49:51.497754   58307 out.go:177] 
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 18:49:52.479838     477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:49:52.481524     477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:49:52.483128     477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:49:52.484736     477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:49:52.486306     477 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	
	
	==> kernel <==
	 18:49:52 up 0 min,  0 users,  load average: 0.27, 0.06, 0.02
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
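For reference, the provisioning command whose failure ends the stdout above is logged through Go's fmt, which is why it appears as printf %!s(MISSING); the shell it appears to run on the guest is roughly the following (a reconstruction from the log lines above, not a verbatim capture):

    # reconstruction of the logged provisioning step, with the printf argument restored
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The mkdir/printf/tee portion evidently succeeds (tee echoes the CRIO_MINIKUBE_OPTIONS line back, which is exactly what the captured output shows); the "dependency" error comes from the final systemctl restart crio.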
** stderr ** 
	E0802 18:49:52.039876   60784 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.076675   60784 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.114042   60784 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.150582   60784 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.181563   60784 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.210949   60784 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.249695   60784 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:49:52.283013   60784 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:49:52Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:49:52Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:49:52Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (222.358472ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (361.30s)
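Note on the crictl output above: the "runtime connect using default endpoints" warning appears because no endpoint is configured, so crictl probes the deprecated default socket list (dockershim, containerd, crio, cri-dockerd) and ultimately fails against cri-dockerd; the underlying cause here is simply that no container runtime was reachable on the stopped node. For reference, the endpoint can be pinned the way minikube itself does later in this report by writing /etc/crictl.yaml; a minimal sketch, assuming the CRI-O socket path used elsewhere in these logs:

    # /etc/crictl.yaml - pin crictl to CRI-O instead of probing deprecated defaults
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # equivalent per-invocation form of the listing command that failed above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --quiet --name=kindnet

This only pins where crictl connects; it does not by itself bring the runtime back up.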

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (767.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m45.670823478s)

-- stdout --
	* [old-k8s-version-490984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-490984" primary control-plane node in "old-k8s-version-490984" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-490984" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0802 18:44:11.412559   58571 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:44:11.412707   58571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:44:11.412719   58571 out.go:304] Setting ErrFile to fd 2...
	I0802 18:44:11.412726   58571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:44:11.413017   58571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:44:11.413743   58571 out.go:298] Setting JSON to false
	I0802 18:44:11.415123   58571 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5195,"bootTime":1722619056,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:44:11.415225   58571 start.go:139] virtualization: kvm guest
	I0802 18:44:11.417504   58571 out.go:177] * [old-k8s-version-490984] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:44:11.418843   58571 notify.go:220] Checking for updates...
	I0802 18:44:11.418864   58571 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:44:11.420214   58571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:44:11.421673   58571 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:44:11.422912   58571 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:44:11.424180   58571 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:44:11.425555   58571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:44:11.427472   58571 config.go:182] Loaded profile config "old-k8s-version-490984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:44:11.428054   58571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:44:11.428111   58571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:44:11.445611   58571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I0802 18:44:11.446007   58571 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:44:11.446562   58571 main.go:141] libmachine: Using API Version  1
	I0802 18:44:11.446590   58571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:44:11.446896   58571 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:44:11.447077   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:44:11.449000   58571 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0802 18:44:11.450254   58571 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:44:11.450547   58571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:44:11.450586   58571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:44:11.465250   58571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39951
	I0802 18:44:11.465683   58571 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:44:11.466169   58571 main.go:141] libmachine: Using API Version  1
	I0802 18:44:11.466197   58571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:44:11.466487   58571 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:44:11.466645   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:44:11.501732   58571 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:44:11.502978   58571 start.go:297] selected driver: kvm2
	I0802 18:44:11.502990   58571 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:44:11.503096   58571 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:44:11.503760   58571 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:44:11.503848   58571 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:44:11.519830   58571 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:44:11.520220   58571 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:44:11.520247   58571 cni.go:84] Creating CNI manager for ""
	I0802 18:44:11.520255   58571 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:44:11.520304   58571 start.go:340] cluster config:
	{Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:44:11.520436   58571 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:44:11.522220   58571 out.go:177] * Starting "old-k8s-version-490984" primary control-plane node in "old-k8s-version-490984" cluster
	I0802 18:44:11.523471   58571 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:44:11.523509   58571 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0802 18:44:11.523531   58571 cache.go:56] Caching tarball of preloaded images
	I0802 18:44:11.523614   58571 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:44:11.523627   58571 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0802 18:44:11.523769   58571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json ...
	I0802 18:44:11.523986   58571 start.go:360] acquireMachinesLock for old-k8s-version-490984: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:48:29.271902   58571 start.go:364] duration metric: took 4m17.747886721s to acquireMachinesLock for "old-k8s-version-490984"
	I0802 18:48:29.271958   58571 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:48:29.271963   58571 fix.go:54] fixHost starting: 
	I0802 18:48:29.272266   58571 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:48:29.272294   58571 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:48:29.287602   58571 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
	I0802 18:48:29.288060   58571 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:48:29.288528   58571 main.go:141] libmachine: Using API Version  1
	I0802 18:48:29.288545   58571 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:48:29.288857   58571 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:48:29.289034   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:29.289174   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetState
	I0802 18:48:29.291032   58571 fix.go:112] recreateIfNeeded on old-k8s-version-490984: state=Stopped err=<nil>
	I0802 18:48:29.291053   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	W0802 18:48:29.291239   58571 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:48:29.293011   58571 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-490984" ...
	I0802 18:48:29.294248   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .Start
	I0802 18:48:29.294418   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring networks are active...
	I0802 18:48:29.295224   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network default is active
	I0802 18:48:29.295661   58571 main.go:141] libmachine: (old-k8s-version-490984) Ensuring network mk-old-k8s-version-490984 is active
	I0802 18:48:29.296018   58571 main.go:141] libmachine: (old-k8s-version-490984) Getting domain xml...
	I0802 18:48:29.296974   58571 main.go:141] libmachine: (old-k8s-version-490984) Creating domain...
	I0802 18:48:30.503712   58571 main.go:141] libmachine: (old-k8s-version-490984) Waiting to get IP...
	I0802 18:48:30.504530   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:30.504922   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:30.504996   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:30.504910   59906 retry.go:31] will retry after 307.580681ms: waiting for machine to come up
	I0802 18:48:30.814553   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:30.814985   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:30.815020   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:30.814914   59906 retry.go:31] will retry after 243.906736ms: waiting for machine to come up
	I0802 18:48:31.060406   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.060854   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.060880   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.060820   59906 retry.go:31] will retry after 392.162755ms: waiting for machine to come up
	I0802 18:48:31.454321   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.454706   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.454733   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.454658   59906 retry.go:31] will retry after 424.820425ms: waiting for machine to come up
	I0802 18:48:31.881487   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:31.881988   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:31.882111   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:31.881954   59906 retry.go:31] will retry after 460.627573ms: waiting for machine to come up
	I0802 18:48:32.344538   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:32.344949   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:32.344978   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:32.344903   59906 retry.go:31] will retry after 589.234832ms: waiting for machine to come up
	I0802 18:48:32.935791   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:32.936157   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:32.936178   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:32.936141   59906 retry.go:31] will retry after 1.009164478s: waiting for machine to come up
	I0802 18:48:33.947364   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:33.947865   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:33.947888   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:33.947816   59906 retry.go:31] will retry after 1.052111058s: waiting for machine to come up
	I0802 18:48:35.001504   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:35.001985   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:35.002018   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:35.001932   59906 retry.go:31] will retry after 1.343846495s: waiting for machine to come up
	I0802 18:48:36.347528   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:36.347869   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:36.347921   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:36.347855   59906 retry.go:31] will retry after 1.919219744s: waiting for machine to come up
	I0802 18:48:38.269875   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:38.270312   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:38.270341   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:38.270293   59906 retry.go:31] will retry after 2.307222377s: waiting for machine to come up
	I0802 18:48:40.579469   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:40.579904   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:40.579936   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:40.579851   59906 retry.go:31] will retry after 2.436290529s: waiting for machine to come up
	I0802 18:48:43.019426   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:43.019804   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | unable to find current IP address of domain old-k8s-version-490984 in network mk-old-k8s-version-490984
	I0802 18:48:43.019843   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | I0802 18:48:43.019767   59906 retry.go:31] will retry after 3.69539651s: waiting for machine to come up
	I0802 18:48:46.717837   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.718393   58571 main.go:141] libmachine: (old-k8s-version-490984) Found IP for machine: 192.168.50.104
	I0802 18:48:46.718419   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has current primary IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.718431   58571 main.go:141] libmachine: (old-k8s-version-490984) Reserving static IP address...
	I0802 18:48:46.718839   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "old-k8s-version-490984", mac: "52:54:00:e1:cb:7a", ip: "192.168.50.104"} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.718865   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | skip adding static IP to network mk-old-k8s-version-490984 - found existing host DHCP lease matching {name: "old-k8s-version-490984", mac: "52:54:00:e1:cb:7a", ip: "192.168.50.104"}
	I0802 18:48:46.718875   58571 main.go:141] libmachine: (old-k8s-version-490984) Reserved static IP address: 192.168.50.104
	I0802 18:48:46.718889   58571 main.go:141] libmachine: (old-k8s-version-490984) Waiting for SSH to be available...
	I0802 18:48:46.718898   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Getting to WaitForSSH function...
	I0802 18:48:46.720922   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.721259   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.721296   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.721420   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH client type: external
	I0802 18:48:46.721445   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa (-rw-------)
	I0802 18:48:46.721482   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 18:48:46.721546   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | About to run SSH command:
	I0802 18:48:46.721568   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | exit 0
	I0802 18:48:46.842782   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | SSH cmd err, output: <nil>: 
	I0802 18:48:46.843151   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetConfigRaw
	I0802 18:48:46.843733   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:46.846029   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.846320   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.846348   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.846618   58571 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/config.json ...
	I0802 18:48:46.846797   58571 machine.go:94] provisionDockerMachine start ...
	I0802 18:48:46.846814   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:46.847004   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:46.849141   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.849499   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.849523   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.849670   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:46.849858   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.849992   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.850123   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:46.850301   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:46.850484   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:46.850495   58571 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:48:46.947427   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 18:48:46.947456   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:46.947690   58571 buildroot.go:166] provisioning hostname "old-k8s-version-490984"
	I0802 18:48:46.947726   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:46.947927   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:46.950710   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.951067   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:46.951094   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:46.951396   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:46.951565   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.951738   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:46.951887   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:46.952038   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:46.952217   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:46.952229   58571 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-490984 && echo "old-k8s-version-490984" | sudo tee /etc/hostname
	I0802 18:48:47.060408   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-490984
	
	I0802 18:48:47.060435   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.063083   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.063461   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.063492   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.063610   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.063787   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.063934   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.064129   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.064331   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.064502   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.064518   58571 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-490984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-490984/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-490984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 18:48:47.166680   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:48:47.166724   58571 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 18:48:47.166749   58571 buildroot.go:174] setting up certificates
	I0802 18:48:47.166759   58571 provision.go:84] configureAuth start
	I0802 18:48:47.166770   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetMachineName
	I0802 18:48:47.167085   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:47.169842   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.170244   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.170279   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.170424   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.172587   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.172942   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.172972   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.173081   58571 provision.go:143] copyHostCerts
	I0802 18:48:47.173130   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 18:48:47.173142   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 18:48:47.173210   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 18:48:47.173305   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 18:48:47.173313   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 18:48:47.173339   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 18:48:47.173408   58571 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 18:48:47.173416   58571 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 18:48:47.173438   58571 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 18:48:47.173504   58571 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-490984 san=[127.0.0.1 192.168.50.104 localhost minikube old-k8s-version-490984]
	I0802 18:48:47.397577   58571 provision.go:177] copyRemoteCerts
	I0802 18:48:47.397633   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 18:48:47.397657   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.400444   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.400761   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.400789   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.400911   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.401126   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.401305   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.401451   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:47.477120   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 18:48:47.499051   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 18:48:47.520431   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 18:48:47.541493   58571 provision.go:87] duration metric: took 374.722098ms to configureAuth
	I0802 18:48:47.541523   58571 buildroot.go:189] setting minikube options for container-runtime
	I0802 18:48:47.541731   58571 config.go:182] Loaded profile config "old-k8s-version-490984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:48:47.541819   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.544555   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.544903   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.544939   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.545047   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.545256   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.545421   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.545543   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.545707   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.545852   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.545866   58571 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 18:48:47.793135   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 18:48:47.793168   58571 machine.go:97] duration metric: took 946.358268ms to provisionDockerMachine
	I0802 18:48:47.793188   58571 start.go:293] postStartSetup for "old-k8s-version-490984" (driver="kvm2")
	I0802 18:48:47.793200   58571 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 18:48:47.793239   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:47.793602   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 18:48:47.793631   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.796301   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.796747   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.796774   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.796984   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.797205   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.797478   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.797638   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:47.877244   58571 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 18:48:47.881119   58571 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 18:48:47.881157   58571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 18:48:47.881235   58571 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 18:48:47.881321   58571 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 18:48:47.881417   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 18:48:47.889970   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:48:47.911725   58571 start.go:296] duration metric: took 118.525715ms for postStartSetup
	I0802 18:48:47.911765   58571 fix.go:56] duration metric: took 18.639800216s for fixHost
	I0802 18:48:47.911788   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:47.914229   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.914507   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:47.914536   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:47.914715   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:47.914932   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.915093   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:47.915283   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:47.915426   58571 main.go:141] libmachine: Using SSH client type: native
	I0802 18:48:47.915597   58571 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.104 22 <nil> <nil>}
	I0802 18:48:47.915607   58571 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0802 18:48:48.011471   58571 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722624527.988954809
	
	I0802 18:48:48.011501   58571 fix.go:216] guest clock: 1722624527.988954809
	I0802 18:48:48.011513   58571 fix.go:229] Guest: 2024-08-02 18:48:47.988954809 +0000 UTC Remote: 2024-08-02 18:48:47.911770242 +0000 UTC m=+276.540714762 (delta=77.184567ms)
	I0802 18:48:48.011550   58571 fix.go:200] guest clock delta is within tolerance: 77.184567ms
	I0802 18:48:48.011558   58571 start.go:83] releasing machines lock for "old-k8s-version-490984", held for 18.739614915s
	I0802 18:48:48.011590   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.011904   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:48.014631   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.015163   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.015195   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.015325   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.015902   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.016099   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .DriverName
	I0802 18:48:48.016197   58571 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 18:48:48.016241   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:48.016326   58571 ssh_runner.go:195] Run: cat /version.json
	I0802 18:48:48.016354   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHHostname
	I0802 18:48:48.019187   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019391   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019565   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.019588   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019733   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:48.019794   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:48.019803   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:48.019935   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHPort
	I0802 18:48:48.020016   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:48.020077   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHKeyPath
	I0802 18:48:48.020180   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:48.020268   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetSSHUsername
	I0802 18:48:48.020334   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:48.020408   58571 sshutil.go:53] new ssh client: &{IP:192.168.50.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/old-k8s-version-490984/id_rsa Username:docker}
	I0802 18:48:48.125009   58571 ssh_runner.go:195] Run: systemctl --version
	I0802 18:48:48.130495   58571 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 18:48:48.274836   58571 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 18:48:48.280446   58571 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 18:48:48.280517   58571 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 18:48:48.295198   58571 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 18:48:48.295222   58571 start.go:495] detecting cgroup driver to use...
	I0802 18:48:48.295294   58571 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 18:48:48.310716   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 18:48:48.324219   58571 docker.go:217] disabling cri-docker service (if available) ...
	I0802 18:48:48.324275   58571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 18:48:48.337583   58571 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 18:48:48.350509   58571 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 18:48:48.457711   58571 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 18:48:48.613498   58571 docker.go:233] disabling docker service ...
	I0802 18:48:48.613584   58571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 18:48:48.630221   58571 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 18:48:48.642385   58571 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 18:48:48.781056   58571 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 18:48:48.924495   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 18:48:48.938824   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 18:48:48.956224   58571 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0802 18:48:48.956315   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.966431   58571 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 18:48:48.966508   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.977309   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 18:48:48.987155   58571 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
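Editor's note: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pinned pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A rough Go equivalent of those edits (same file and keys as the log; the regexp handling is a simplification of what sed does, not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        s := string(data)

        // pause_image = "registry.k8s.io/pause:3.2"
        s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
        // cgroup_manager = "cgroupfs"
        s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
        // Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
        s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(s, "")
        s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")

        if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
            log.Fatal(err)
        }
    }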
	I0802 18:48:48.997040   58571 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 18:48:49.007582   58571 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 18:48:49.017581   58571 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 18:48:49.017641   58571 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 18:48:49.029876   58571 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
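Editor's note: the failed sysctl above just means the br_netfilter module is not loaded yet, which is why the next steps are modprobe and enabling IPv4 forwarding. A small Go sketch of that fallback (run as root; modprobe goes through os/exec because there is no portable pure-Go way to load a kernel module):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // If the bridge netfilter sysctl path is missing, the module is not loaded yet.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            log.Fatal(err)
        }
    }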
	I0802 18:48:49.040020   58571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:48:49.155163   58571 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 18:48:49.289885   58571 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 18:48:49.289961   58571 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 18:48:49.295125   58571 start.go:563] Will wait 60s for crictl version
	I0802 18:48:49.295185   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:49.298824   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 18:48:49.334988   58571 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 18:48:49.335088   58571 ssh_runner.go:195] Run: crio --version
	I0802 18:48:49.362449   58571 ssh_runner.go:195] Run: crio --version
	I0802 18:48:49.390675   58571 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0802 18:48:49.391954   58571 main.go:141] libmachine: (old-k8s-version-490984) Calling .GetIP
	I0802 18:48:49.395185   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:49.395560   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:cb:7a", ip: ""} in network mk-old-k8s-version-490984: {Iface:virbr2 ExpiryTime:2024-08-02 19:48:39 +0000 UTC Type:0 Mac:52:54:00:e1:cb:7a Iaid: IPaddr:192.168.50.104 Prefix:24 Hostname:old-k8s-version-490984 Clientid:01:52:54:00:e1:cb:7a}
	I0802 18:48:49.395612   58571 main.go:141] libmachine: (old-k8s-version-490984) DBG | domain old-k8s-version-490984 has defined IP address 192.168.50.104 and MAC address 52:54:00:e1:cb:7a in network mk-old-k8s-version-490984
	I0802 18:48:49.395840   58571 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0802 18:48:49.399621   58571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
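Editor's note: the bash one-liner above is an idempotent /etc/hosts rewrite: drop any existing line for host.minikube.internal, append the fresh mapping to the gateway address, and copy the result back. A Go sketch of the same rewrite (the 192.168.50.1 address comes from the log; this is an illustration, not minikube's code):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const name = "host.minikube.internal"
        const entry = "192.168.50.1\t" + name

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Mirrors grep -v $'\thost.minikube.internal$'
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }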
	I0802 18:48:49.411028   58571 kubeadm.go:883] updating cluster {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 18:48:49.411196   58571 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 18:48:49.411332   58571 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:48:49.458890   58571 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:48:49.458956   58571 ssh_runner.go:195] Run: which lz4
	I0802 18:48:49.462789   58571 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0802 18:48:49.466642   58571 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 18:48:49.466682   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0802 18:48:50.914994   58571 crio.go:462] duration metric: took 1.452253234s to copy over tarball
	I0802 18:48:50.915068   58571 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 18:48:53.707251   58571 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.792154194s)
	I0802 18:48:53.707284   58571 crio.go:469] duration metric: took 2.792264852s to extract the tarball
	I0802 18:48:53.707294   58571 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 18:48:53.749509   58571 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 18:48:53.784343   58571 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0802 18:48:53.784368   58571 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0802 18:48:53.784448   58571 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:53.784471   58571 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:53.784506   58571 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:53.784530   58571 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:53.784555   58571 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:53.784504   58571 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:53.784511   58571 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:53.784471   58571 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0802 18:48:53.786203   58571 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:53.786215   58571 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:53.786238   58571 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:53.786242   58571 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:53.786209   58571 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:53.786266   58571 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0802 18:48:53.786286   58571 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:53.786309   58571 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:54.020645   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0802 18:48:54.055338   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.060117   58571 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0802 18:48:54.060168   58571 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0802 18:48:54.060212   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.064500   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.074234   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.077297   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.090758   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.100361   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.118683   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0802 18:48:54.118733   58571 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0802 18:48:54.118769   58571 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.118810   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.175733   58571 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0802 18:48:54.175785   58571 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.175839   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.202446   58571 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0802 18:48:54.202499   58571 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.202501   58571 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0802 18:48:54.202540   58571 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.202552   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.202580   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.238954   58571 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0802 18:48:54.238998   58571 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.239020   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0802 18:48:54.239046   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.239019   58571 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0802 18:48:54.239150   58571 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.239179   58571 ssh_runner.go:195] Run: which crictl
	I0802 18:48:54.246523   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0802 18:48:54.246560   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0802 18:48:54.246592   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0802 18:48:54.246629   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0802 18:48:54.251430   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0802 18:48:54.341115   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0802 18:48:54.341176   58571 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0802 18:48:54.353210   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0802 18:48:54.357711   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0802 18:48:54.357793   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0802 18:48:54.357830   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0802 18:48:54.377314   58571 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0802 18:48:54.667926   58571 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 18:48:54.810673   58571 cache_images.go:92] duration metric: took 1.026282543s to LoadCachedImages
	W0802 18:48:54.810786   58571 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19355-5397/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
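Editor's note: the block above is the per-image cache check. For each required image the tool asks the runtime (via podman image inspect) for the image ID, compares it against the pinned hash, marks mismatches as "needs transfer", removes them with crictl rmi, and then tries to load the cached tarball, which does not exist on this host. A simplified Go sketch of the check for a single image; the helper name needsTransfer is ours, and the expected hash is the pause:3.2 digest from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime's copy of image differs from the
    // pinned ID, i.e. whether the image must be (re)loaded from the cache.
    func needsTransfer(image, wantID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        transfer := needsTransfer("registry.k8s.io/pause:3.2",
            "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
        fmt.Println("needs transfer:", transfer)
    }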
	I0802 18:48:54.810860   58571 kubeadm.go:934] updating node { 192.168.50.104 8443 v1.20.0 crio true true} ...
	I0802 18:48:54.811043   58571 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-490984 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 18:48:54.811172   58571 ssh_runner.go:195] Run: crio config
	I0802 18:48:54.858477   58571 cni.go:84] Creating CNI manager for ""
	I0802 18:48:54.858501   58571 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:48:54.858513   58571 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 18:48:54.858548   58571 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.104 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-490984 NodeName:old-k8s-version-490984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0802 18:48:54.858702   58571 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-490984"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 18:48:54.858783   58571 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0802 18:48:54.868766   58571 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 18:48:54.868846   58571 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 18:48:54.878136   58571 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0802 18:48:54.894844   58571 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 18:48:54.910396   58571 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0802 18:48:54.929209   58571 ssh_runner.go:195] Run: grep 192.168.50.104	control-plane.minikube.internal$ /etc/hosts
	I0802 18:48:54.932947   58571 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.104	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 18:48:54.946404   58571 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 18:48:55.063040   58571 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 18:48:55.083216   58571 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984 for IP: 192.168.50.104
	I0802 18:48:55.083252   58571 certs.go:194] generating shared ca certs ...
	I0802 18:48:55.083274   58571 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:48:55.083478   58571 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 18:48:55.083544   58571 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 18:48:55.083564   58571 certs.go:256] generating profile certs ...
	I0802 18:48:55.083692   58571 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.key
	I0802 18:48:55.083785   58571 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key.64198073
	I0802 18:48:55.083847   58571 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key
	I0802 18:48:55.084009   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 18:48:55.084066   58571 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 18:48:55.084083   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 18:48:55.084124   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 18:48:55.084162   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 18:48:55.084199   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 18:48:55.084267   58571 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 18:48:55.084999   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 18:48:55.128252   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 18:48:55.163809   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 18:48:55.190126   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 18:48:55.219164   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0802 18:48:55.247315   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 18:48:55.297162   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 18:48:55.326070   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 18:48:55.349221   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 18:48:55.371877   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 18:48:55.394715   58571 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 18:48:55.417601   58571 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 18:48:55.433829   58571 ssh_runner.go:195] Run: openssl version
	I0802 18:48:55.439490   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 18:48:55.449897   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.454201   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.454259   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 18:48:55.459982   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 18:48:55.469959   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 18:48:55.480093   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.484484   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.484558   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 18:48:55.489763   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 18:48:55.500296   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 18:48:55.510694   58571 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.515067   58571 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.515154   58571 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 18:48:55.521358   58571 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 18:48:55.531311   58571 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 18:48:55.536083   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 18:48:55.541867   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 18:48:55.547185   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 18:48:55.552672   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 18:48:55.557817   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 18:48:55.563287   58571 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
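Editor's note: the series of openssl -checkend 86400 calls above simply verifies that each control-plane certificate remains valid for at least another 24 hours. The same check expressed in Go with crypto/x509 (the file list mirrors the log; this is a sketch, not minikube's actual implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
            "/var/lib/minikube/certs/etcd/peer.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        deadline := time.Now().Add(24 * time.Hour) // openssl's -checkend 86400
        for _, path := range certs {
            data, err := os.ReadFile(path)
            if err != nil {
                log.Fatal(err)
            }
            block, _ := pem.Decode(data)
            if block == nil {
                log.Fatalf("%s: no PEM block found", path)
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s expires %s (valid for 24h: %v)\n",
                path, cert.NotAfter.Format(time.RFC3339), cert.NotAfter.After(deadline))
        }
    }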
	I0802 18:48:55.568597   58571 kubeadm.go:392] StartCluster: {Name:old-k8s-version-490984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-490984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.104 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:48:55.568699   58571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 18:48:55.568749   58571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:48:55.612416   58571 cri.go:89] found id: ""
	I0802 18:48:55.612487   58571 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 18:48:55.621919   58571 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 18:48:55.621938   58571 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 18:48:55.621977   58571 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 18:48:55.630826   58571 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:48:55.631493   58571 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-490984" does not appear in /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:48:55.631838   58571 kubeconfig.go:62] /home/jenkins/minikube-integration/19355-5397/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-490984" cluster setting kubeconfig missing "old-k8s-version-490984" context setting]
	I0802 18:48:55.632363   58571 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 18:48:55.634644   58571 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 18:48:55.643386   58571 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.104
	I0802 18:48:55.643416   58571 kubeadm.go:1160] stopping kube-system containers ...
	I0802 18:48:55.643429   58571 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 18:48:55.643488   58571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 18:48:55.676501   58571 cri.go:89] found id: ""
	I0802 18:48:55.676577   58571 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 18:48:55.692747   58571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:48:55.701664   58571 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:48:55.701686   58571 kubeadm.go:157] found existing configuration files:
	
	I0802 18:48:55.701734   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:48:55.710027   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:48:55.710079   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:48:55.719120   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:48:55.727623   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:48:55.727667   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:48:55.736204   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:48:55.744564   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:48:55.744641   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:48:55.753239   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:48:55.761560   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:48:55.761613   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
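Editor's note: the grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed before kubeadm regenerates it (here they are simply missing). A compact Go sketch of that rule, using the same four files and endpoint as the log:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing elsewhere: remove so kubeadm rewrites it.
                if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
                    log.Fatal(rmErr)
                }
            }
        }
    }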
	I0802 18:48:55.770368   58571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:48:55.779598   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:55.893533   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:56.864800   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:57.089710   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 18:48:57.184779   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
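Editor's note: on a restart the tool does not run a full kubeadm init; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml. A sketch of driving the same sequence from Go by exec-ing the commands shown in the log (PATH override and config path taken from the log; this is an illustration, not minikube's code):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        pathEnv := "PATH=/var/lib/minikube/binaries/v1.20.0:" + os.Getenv("PATH")
        for _, p := range phases {
            args := append([]string{"env", pathEnv, "kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                log.Fatalf("kubeadm init phase %v failed: %v\n%s", p, err, out)
            }
        }
    }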
	I0802 18:48:57.268095   58571 api_server.go:52] waiting for apiserver process to appear ...
	I0802 18:48:57.268190   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:57.768972   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:58.268488   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:58.768518   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:59.269207   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:48:59.768438   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:00.269117   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:00.768397   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:01.269091   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:01.769121   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:02.268891   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:02.768679   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:03.269000   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:03.768285   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:04.268702   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:04.768630   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:05.269090   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:05.768354   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:06.268502   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:06.769230   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:07.268885   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:07.769240   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:08.268946   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:08.768824   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:09.269232   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:09.769180   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:10.268960   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:10.768720   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:11.268345   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:11.769141   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:12.268794   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:12.769269   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:13.268381   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:13.768918   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:14.268953   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:14.769249   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.268538   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:15.768893   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:16.269173   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:16.769155   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.268386   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:17.768359   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:18.269292   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:18.768387   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:19.269201   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:19.768685   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:20.268340   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:20.769157   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:21.268288   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:21.768313   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:22.268845   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:22.769066   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:23.268672   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:23.768752   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:24.268335   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:24.768409   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:25.268773   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:25.768816   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:26.269062   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:26.768485   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:27.269191   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:27.769035   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:28.268999   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:28.768580   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:29.268534   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:29.768543   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:30.268550   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:30.768427   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:31.268562   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:31.768936   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:32.268934   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:32.769268   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:33.268701   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:33.768714   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:34.268342   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:34.769189   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:35.268618   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:35.769096   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:36.269207   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:36.768436   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:37.269059   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:37.769310   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:38.268396   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:38.768735   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:39.269062   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:39.769010   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:40.268815   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:40.768398   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:41.268785   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:41.768380   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:42.268246   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:42.769151   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:43.269202   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:43.768417   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:44.268594   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:44.768407   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:45.269136   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:45.768811   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:46.268264   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:46.768999   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:47.268792   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:47.768553   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:48.268258   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:48.768480   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:49.268353   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:49.768558   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:50.268832   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:50.768778   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:51.268701   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:51.769197   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:52.269029   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:52.768274   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:53.268747   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:53.768405   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:54.268872   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:54.768581   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:55.268596   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:55.768979   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:56.268301   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:49:56.768464   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
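Editor's note: everything from 18:48:57 to 18:49:56 above is a single polling loop; roughly every 500ms the tool runs pgrep for a kube-apiserver process and, once the minute-long budget is spent with no hit, falls back to collecting diagnostics (the crictl/journalctl block that follows). A bare-bones Go version of that wait with the same probe command; the 60-second timeout matches what the log shows, and the helper name waitForAPIServer is ours.

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for a kube-apiserver process until it appears or ctx expires.
    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // Same probe as the log: pgrep -xnf kube-apiserver.*minikube.*
            if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // process found
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
        defer cancel()
        if err := waitForAPIServer(ctx); err != nil {
            fmt.Println(err) // at this point the test falls back to gathering logs
        }
    }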
	I0802 18:49:57.268610   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:49:57.268709   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:49:57.305214   58571 cri.go:89] found id: ""
	I0802 18:49:57.305238   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.305245   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:49:57.305252   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:49:57.305312   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:49:57.335816   58571 cri.go:89] found id: ""
	I0802 18:49:57.335848   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.335858   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:49:57.335863   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:49:57.335916   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:49:57.369059   58571 cri.go:89] found id: ""
	I0802 18:49:57.369085   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.369093   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:49:57.369099   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:49:57.369149   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:49:57.400793   58571 cri.go:89] found id: ""
	I0802 18:49:57.400818   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.400828   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:49:57.400835   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:49:57.400895   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:49:57.436003   58571 cri.go:89] found id: ""
	I0802 18:49:57.436029   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.436037   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:49:57.436043   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:49:57.436094   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:49:57.471921   58571 cri.go:89] found id: ""
	I0802 18:49:57.471946   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.471953   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:49:57.471959   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:49:57.472018   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:49:57.507986   58571 cri.go:89] found id: ""
	I0802 18:49:57.508017   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.508028   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:49:57.508035   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:49:57.508104   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:49:57.545755   58571 cri.go:89] found id: ""
	I0802 18:49:57.545800   58571 logs.go:276] 0 containers: []
	W0802 18:49:57.545813   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:49:57.545825   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:49:57.545840   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:49:57.595059   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:49:57.595117   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:49:57.608440   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:49:57.608472   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:49:57.726345   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:49:57.726371   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:49:57.726387   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:49:57.789205   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:49:57.789234   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:00.325024   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:00.337900   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:00.337963   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:00.374451   58571 cri.go:89] found id: ""
	I0802 18:50:00.374476   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.374487   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:00.374494   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:00.374561   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:00.413767   58571 cri.go:89] found id: ""
	I0802 18:50:00.413799   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.413808   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:00.413814   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:00.413892   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:00.445192   58571 cri.go:89] found id: ""
	I0802 18:50:00.445230   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.445242   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:00.445250   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:00.445307   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:00.475960   58571 cri.go:89] found id: ""
	I0802 18:50:00.475987   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.475995   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:00.476001   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:00.476070   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:00.515810   58571 cri.go:89] found id: ""
	I0802 18:50:00.515835   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.515843   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:00.515848   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:00.515898   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:00.561359   58571 cri.go:89] found id: ""
	I0802 18:50:00.561388   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.561397   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:00.561403   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:00.561461   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:00.608162   58571 cri.go:89] found id: ""
	I0802 18:50:00.608189   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.608200   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:00.608208   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:00.608267   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:00.639693   58571 cri.go:89] found id: ""
	I0802 18:50:00.639727   58571 logs.go:276] 0 containers: []
	W0802 18:50:00.639738   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:00.639748   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:00.639760   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:00.692689   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:00.692727   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:00.705930   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:00.705954   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:00.779139   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:00.779162   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:00.779174   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:00.843197   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:00.843230   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:03.384967   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:03.397337   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:03.397400   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:03.430450   58571 cri.go:89] found id: ""
	I0802 18:50:03.430474   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.430482   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:03.430489   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:03.430569   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:03.461584   58571 cri.go:89] found id: ""
	I0802 18:50:03.461607   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.461615   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:03.461620   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:03.461687   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:03.493614   58571 cri.go:89] found id: ""
	I0802 18:50:03.493643   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.493651   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:03.493657   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:03.493706   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:03.529039   58571 cri.go:89] found id: ""
	I0802 18:50:03.529065   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.529073   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:03.529080   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:03.529126   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:03.560442   58571 cri.go:89] found id: ""
	I0802 18:50:03.560470   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.560482   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:03.560491   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:03.560561   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:03.592642   58571 cri.go:89] found id: ""
	I0802 18:50:03.592667   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.592675   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:03.592680   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:03.592733   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:03.625268   58571 cri.go:89] found id: ""
	I0802 18:50:03.625292   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.625299   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:03.625305   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:03.625361   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:03.660218   58571 cri.go:89] found id: ""
	I0802 18:50:03.660245   58571 logs.go:276] 0 containers: []
	W0802 18:50:03.660256   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:03.660266   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:03.660282   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:03.695751   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:03.695780   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:03.742361   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:03.742398   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:03.755312   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:03.755347   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:03.819319   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:03.819342   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:03.819355   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:06.388366   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:06.401466   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:06.401533   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:06.436567   58571 cri.go:89] found id: ""
	I0802 18:50:06.436587   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.436596   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:06.436602   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:06.436645   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:06.477173   58571 cri.go:89] found id: ""
	I0802 18:50:06.477198   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.477208   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:06.477215   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:06.477264   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:06.514603   58571 cri.go:89] found id: ""
	I0802 18:50:06.514671   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.514696   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:06.514707   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:06.514769   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:06.552912   58571 cri.go:89] found id: ""
	I0802 18:50:06.552949   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.552959   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:06.552967   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:06.553028   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:06.587513   58571 cri.go:89] found id: ""
	I0802 18:50:06.587534   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.587543   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:06.587550   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:06.587602   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:06.625599   58571 cri.go:89] found id: ""
	I0802 18:50:06.625617   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.625623   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:06.625629   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:06.625669   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:06.663006   58571 cri.go:89] found id: ""
	I0802 18:50:06.663028   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.663038   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:06.663044   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:06.663085   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:06.698118   58571 cri.go:89] found id: ""
	I0802 18:50:06.698136   58571 logs.go:276] 0 containers: []
	W0802 18:50:06.698143   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:06.698151   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:06.698161   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:06.736654   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:06.736680   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:06.794522   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:06.794548   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:06.809298   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:06.809325   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:06.878543   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:06.878566   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:06.878581   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:09.460958   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:09.475281   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:09.475344   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:09.512733   58571 cri.go:89] found id: ""
	I0802 18:50:09.512762   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.512775   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:09.512783   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:09.512851   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:09.547296   58571 cri.go:89] found id: ""
	I0802 18:50:09.547324   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.547335   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:09.547353   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:09.547414   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:09.586162   58571 cri.go:89] found id: ""
	I0802 18:50:09.586188   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.586198   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:09.586206   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:09.586272   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:09.628629   58571 cri.go:89] found id: ""
	I0802 18:50:09.628658   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.628668   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:09.628676   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:09.628738   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:09.665706   58571 cri.go:89] found id: ""
	I0802 18:50:09.665728   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.665740   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:09.665745   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:09.665792   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:09.703374   58571 cri.go:89] found id: ""
	I0802 18:50:09.703402   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.703413   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:09.703426   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:09.703503   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:09.737197   58571 cri.go:89] found id: ""
	I0802 18:50:09.737242   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.737253   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:09.737260   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:09.737324   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:09.771984   58571 cri.go:89] found id: ""
	I0802 18:50:09.772010   58571 logs.go:276] 0 containers: []
	W0802 18:50:09.772019   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:09.772027   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:09.772041   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:09.824519   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:09.824554   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:09.837459   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:09.837486   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:09.911908   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:09.911931   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:09.911946   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:09.993792   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:09.993831   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:12.531046   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:12.543242   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:12.543319   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:12.579001   58571 cri.go:89] found id: ""
	I0802 18:50:12.579029   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.579039   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:12.579044   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:12.579093   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:12.614727   58571 cri.go:89] found id: ""
	I0802 18:50:12.614753   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.614764   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:12.614772   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:12.614872   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:12.650367   58571 cri.go:89] found id: ""
	I0802 18:50:12.650395   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.650406   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:12.650414   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:12.650476   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:12.682765   58571 cri.go:89] found id: ""
	I0802 18:50:12.682793   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.682803   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:12.682810   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:12.682874   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:12.716319   58571 cri.go:89] found id: ""
	I0802 18:50:12.716350   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.716360   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:12.716368   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:12.716439   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:12.748419   58571 cri.go:89] found id: ""
	I0802 18:50:12.748446   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.748456   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:12.748463   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:12.748539   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:12.785974   58571 cri.go:89] found id: ""
	I0802 18:50:12.786001   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.786020   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:12.786028   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:12.786093   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:12.823431   58571 cri.go:89] found id: ""
	I0802 18:50:12.823457   58571 logs.go:276] 0 containers: []
	W0802 18:50:12.823470   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:12.823479   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:12.823492   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:12.886153   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:12.886193   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:12.901867   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:12.901898   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:12.984711   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:12.984735   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:12.984753   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:13.063252   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:13.063285   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:15.607470   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:15.626236   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:15.626316   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:15.666478   58571 cri.go:89] found id: ""
	I0802 18:50:15.666508   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.666519   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:15.666527   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:15.666593   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:15.700283   58571 cri.go:89] found id: ""
	I0802 18:50:15.700329   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.700336   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:15.700342   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:15.700392   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:15.735133   58571 cri.go:89] found id: ""
	I0802 18:50:15.735174   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.735184   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:15.735192   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:15.735320   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:15.768154   58571 cri.go:89] found id: ""
	I0802 18:50:15.768183   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.768194   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:15.768201   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:15.768261   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:15.802126   58571 cri.go:89] found id: ""
	I0802 18:50:15.802154   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.802166   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:15.802173   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:15.802236   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:15.833622   58571 cri.go:89] found id: ""
	I0802 18:50:15.833648   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.833659   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:15.833667   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:15.833735   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:15.865485   58571 cri.go:89] found id: ""
	I0802 18:50:15.865515   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.865528   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:15.865537   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:15.865601   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:15.897489   58571 cri.go:89] found id: ""
	I0802 18:50:15.897516   58571 logs.go:276] 0 containers: []
	W0802 18:50:15.897527   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:15.897538   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:15.897552   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:15.962473   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:15.962507   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:15.977681   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:15.977718   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:16.047733   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:16.047762   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:16.047778   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:16.141328   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:16.141370   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:18.686483   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:18.701616   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:18.701677   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:18.736685   58571 cri.go:89] found id: ""
	I0802 18:50:18.736707   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.736715   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:18.736721   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:18.736782   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:18.772904   58571 cri.go:89] found id: ""
	I0802 18:50:18.772929   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.772940   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:18.772948   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:18.773015   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:18.805739   58571 cri.go:89] found id: ""
	I0802 18:50:18.805765   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.805772   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:18.805778   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:18.805835   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:18.837852   58571 cri.go:89] found id: ""
	I0802 18:50:18.837887   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.837899   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:18.837908   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:18.837982   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:18.870483   58571 cri.go:89] found id: ""
	I0802 18:50:18.870511   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.870519   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:18.870525   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:18.870583   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:18.902749   58571 cri.go:89] found id: ""
	I0802 18:50:18.902779   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.902790   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:18.902799   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:18.902856   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:18.936488   58571 cri.go:89] found id: ""
	I0802 18:50:18.936512   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.936520   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:18.936526   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:18.936574   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:18.968455   58571 cri.go:89] found id: ""
	I0802 18:50:18.968482   58571 logs.go:276] 0 containers: []
	W0802 18:50:18.968491   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:18.968501   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:18.968516   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:19.004611   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:19.004646   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:19.055802   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:19.055834   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:19.069343   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:19.069368   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:19.134231   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:19.134259   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:19.134276   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:21.712307   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:21.725157   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:21.725224   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:21.760417   58571 cri.go:89] found id: ""
	I0802 18:50:21.760445   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.760458   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:21.760465   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:21.760528   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:21.793760   58571 cri.go:89] found id: ""
	I0802 18:50:21.793784   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.793794   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:21.793802   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:21.793858   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:21.826610   58571 cri.go:89] found id: ""
	I0802 18:50:21.826637   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.826648   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:21.826655   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:21.826716   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:21.859338   58571 cri.go:89] found id: ""
	I0802 18:50:21.859367   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.859378   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:21.859385   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:21.859448   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:21.891623   58571 cri.go:89] found id: ""
	I0802 18:50:21.891645   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.891653   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:21.891659   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:21.891726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:21.924878   58571 cri.go:89] found id: ""
	I0802 18:50:21.924901   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.924910   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:21.924916   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:21.924963   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:21.958290   58571 cri.go:89] found id: ""
	I0802 18:50:21.958319   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.958330   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:21.958337   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:21.958390   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:21.990663   58571 cri.go:89] found id: ""
	I0802 18:50:21.990688   58571 logs.go:276] 0 containers: []
	W0802 18:50:21.990699   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:21.990710   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:21.990726   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:22.065914   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:22.065951   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:22.102153   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:22.102181   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:22.151423   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:22.151459   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:22.164874   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:22.164900   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:22.233001   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:24.733540   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:24.746005   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:24.746065   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:24.777734   58571 cri.go:89] found id: ""
	I0802 18:50:24.777759   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.777770   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:24.777778   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:24.777843   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:24.811150   58571 cri.go:89] found id: ""
	I0802 18:50:24.811178   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.811189   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:24.811197   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:24.811258   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:24.842718   58571 cri.go:89] found id: ""
	I0802 18:50:24.842747   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.842758   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:24.842766   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:24.842821   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:24.874794   58571 cri.go:89] found id: ""
	I0802 18:50:24.874820   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.874830   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:24.874838   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:24.874892   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:24.906811   58571 cri.go:89] found id: ""
	I0802 18:50:24.906842   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.906852   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:24.906859   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:24.906923   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:24.939532   58571 cri.go:89] found id: ""
	I0802 18:50:24.939564   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.939574   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:24.939588   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:24.939651   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:24.973221   58571 cri.go:89] found id: ""
	I0802 18:50:24.973249   58571 logs.go:276] 0 containers: []
	W0802 18:50:24.973258   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:24.973265   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:24.973340   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:25.010296   58571 cri.go:89] found id: ""
	I0802 18:50:25.010322   58571 logs.go:276] 0 containers: []
	W0802 18:50:25.010331   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:25.010340   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:25.010354   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:25.058800   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:25.058826   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:25.072109   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:25.072128   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:25.138257   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:25.138277   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:25.138293   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:25.214501   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:25.214532   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:27.752091   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:27.764846   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:27.764921   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:27.798311   58571 cri.go:89] found id: ""
	I0802 18:50:27.798337   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.798349   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:27.798356   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:27.798419   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:27.834243   58571 cri.go:89] found id: ""
	I0802 18:50:27.834269   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.834279   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:27.834292   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:27.834348   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:27.869157   58571 cri.go:89] found id: ""
	I0802 18:50:27.869189   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.869201   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:27.869209   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:27.869264   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:27.903886   58571 cri.go:89] found id: ""
	I0802 18:50:27.903914   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.903925   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:27.903934   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:27.904002   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:27.939148   58571 cri.go:89] found id: ""
	I0802 18:50:27.939178   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.939190   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:27.939198   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:27.939258   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:27.973674   58571 cri.go:89] found id: ""
	I0802 18:50:27.973697   58571 logs.go:276] 0 containers: []
	W0802 18:50:27.973704   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:27.973710   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:27.973758   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:28.011150   58571 cri.go:89] found id: ""
	I0802 18:50:28.011173   58571 logs.go:276] 0 containers: []
	W0802 18:50:28.011182   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:28.011187   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:28.011243   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:28.048825   58571 cri.go:89] found id: ""
	I0802 18:50:28.048848   58571 logs.go:276] 0 containers: []
	W0802 18:50:28.048859   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:28.048869   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:28.048884   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:28.099229   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:28.099258   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:28.111945   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:28.111970   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:28.179567   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:28.179594   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:28.179609   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:28.260772   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:28.260805   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:30.802658   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:30.815414   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:30.815493   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:30.849372   58571 cri.go:89] found id: ""
	I0802 18:50:30.849402   58571 logs.go:276] 0 containers: []
	W0802 18:50:30.849413   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:30.849420   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:30.849492   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:30.883584   58571 cri.go:89] found id: ""
	I0802 18:50:30.883607   58571 logs.go:276] 0 containers: []
	W0802 18:50:30.883615   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:30.883620   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:30.883679   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:30.922893   58571 cri.go:89] found id: ""
	I0802 18:50:30.922917   58571 logs.go:276] 0 containers: []
	W0802 18:50:30.922927   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:30.922934   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:30.922995   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:30.965740   58571 cri.go:89] found id: ""
	I0802 18:50:30.965768   58571 logs.go:276] 0 containers: []
	W0802 18:50:30.965779   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:30.965787   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:30.965850   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:31.004005   58571 cri.go:89] found id: ""
	I0802 18:50:31.004033   58571 logs.go:276] 0 containers: []
	W0802 18:50:31.004045   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:31.004053   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:31.004136   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:31.044544   58571 cri.go:89] found id: ""
	I0802 18:50:31.044571   58571 logs.go:276] 0 containers: []
	W0802 18:50:31.044582   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:31.044590   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:31.044648   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:31.087563   58571 cri.go:89] found id: ""
	I0802 18:50:31.087588   58571 logs.go:276] 0 containers: []
	W0802 18:50:31.087600   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:31.087607   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:31.087671   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:31.124280   58571 cri.go:89] found id: ""
	I0802 18:50:31.124304   58571 logs.go:276] 0 containers: []
	W0802 18:50:31.124321   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:31.124333   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:31.124348   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:31.176389   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:31.176422   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:31.191990   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:31.192017   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:31.271088   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:31.271128   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:31.271145   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:31.354162   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:31.354200   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:33.893478   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:33.907858   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:33.907930   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:33.950458   58571 cri.go:89] found id: ""
	I0802 18:50:33.950490   58571 logs.go:276] 0 containers: []
	W0802 18:50:33.950503   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:33.950511   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:33.950579   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:33.985429   58571 cri.go:89] found id: ""
	I0802 18:50:33.985454   58571 logs.go:276] 0 containers: []
	W0802 18:50:33.985464   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:33.985472   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:33.985534   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:34.021790   58571 cri.go:89] found id: ""
	I0802 18:50:34.021815   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.021825   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:34.021833   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:34.021895   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:34.058579   58571 cri.go:89] found id: ""
	I0802 18:50:34.058612   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.058622   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:34.058629   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:34.058721   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:34.099340   58571 cri.go:89] found id: ""
	I0802 18:50:34.099366   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.099377   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:34.099385   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:34.099454   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:34.136398   58571 cri.go:89] found id: ""
	I0802 18:50:34.136422   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.136430   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:34.136437   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:34.136505   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:34.175557   58571 cri.go:89] found id: ""
	I0802 18:50:34.175579   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.175591   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:34.175598   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:34.175666   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:34.208754   58571 cri.go:89] found id: ""
	I0802 18:50:34.208780   58571 logs.go:276] 0 containers: []
	W0802 18:50:34.208793   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:34.208809   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:34.208827   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:34.282403   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:34.282434   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:34.282454   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:34.370500   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:34.370540   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:34.406598   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:34.406626   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:34.461617   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:34.461669   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:36.976444   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:36.990238   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:36.990308   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:37.023488   58571 cri.go:89] found id: ""
	I0802 18:50:37.023516   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.023527   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:37.023534   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:37.023603   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:37.055330   58571 cri.go:89] found id: ""
	I0802 18:50:37.055358   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.055369   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:37.055377   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:37.055434   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:37.096393   58571 cri.go:89] found id: ""
	I0802 18:50:37.096418   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.096426   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:37.096432   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:37.096492   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:37.131629   58571 cri.go:89] found id: ""
	I0802 18:50:37.131659   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.131669   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:37.131676   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:37.131741   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:37.166753   58571 cri.go:89] found id: ""
	I0802 18:50:37.166781   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.166792   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:37.166799   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:37.166855   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:37.198008   58571 cri.go:89] found id: ""
	I0802 18:50:37.198040   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.198051   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:37.198058   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:37.198124   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:37.231814   58571 cri.go:89] found id: ""
	I0802 18:50:37.231841   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.231852   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:37.231859   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:37.231921   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:37.266940   58571 cri.go:89] found id: ""
	I0802 18:50:37.266976   58571 logs.go:276] 0 containers: []
	W0802 18:50:37.266989   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:37.267001   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:37.267018   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:37.338708   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:37.338733   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:37.338751   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:37.419277   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:37.419311   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:37.459390   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:37.459415   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:37.515593   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:37.515627   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:40.030974   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:40.043965   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:40.044050   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:40.079559   58571 cri.go:89] found id: ""
	I0802 18:50:40.079585   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.079596   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:40.079604   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:40.079668   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:40.130551   58571 cri.go:89] found id: ""
	I0802 18:50:40.130578   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.130595   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:40.130603   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:40.130671   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:40.172123   58571 cri.go:89] found id: ""
	I0802 18:50:40.172172   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.172184   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:40.172191   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:40.172257   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:40.208525   58571 cri.go:89] found id: ""
	I0802 18:50:40.208553   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.208565   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:40.208573   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:40.208624   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:40.240466   58571 cri.go:89] found id: ""
	I0802 18:50:40.240498   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.240509   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:40.240516   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:40.240580   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:40.277998   58571 cri.go:89] found id: ""
	I0802 18:50:40.278030   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.278042   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:40.278051   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:40.278125   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:40.314777   58571 cri.go:89] found id: ""
	I0802 18:50:40.314809   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.314820   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:40.314827   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:40.314896   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:40.348827   58571 cri.go:89] found id: ""
	I0802 18:50:40.348849   58571 logs.go:276] 0 containers: []
	W0802 18:50:40.348857   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:40.348866   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:40.348878   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:40.394157   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:40.394198   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:40.450322   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:40.450353   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:40.466152   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:40.466181   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:40.538003   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:40.538030   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:40.538047   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:43.113894   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:43.128422   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:43.128490   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:43.172369   58571 cri.go:89] found id: ""
	I0802 18:50:43.172393   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.172401   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:43.172407   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:43.172471   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:43.209524   58571 cri.go:89] found id: ""
	I0802 18:50:43.209552   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.209562   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:43.209570   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:43.209637   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:43.242793   58571 cri.go:89] found id: ""
	I0802 18:50:43.242823   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.242834   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:43.242842   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:43.242904   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:43.281821   58571 cri.go:89] found id: ""
	I0802 18:50:43.281847   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.281857   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:43.281864   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:43.281921   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:43.325116   58571 cri.go:89] found id: ""
	I0802 18:50:43.325149   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.325161   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:43.325168   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:43.325229   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:43.363816   58571 cri.go:89] found id: ""
	I0802 18:50:43.363840   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.363851   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:43.363858   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:43.363913   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:43.405028   58571 cri.go:89] found id: ""
	I0802 18:50:43.405053   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.405063   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:43.405070   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:43.405121   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:43.448082   58571 cri.go:89] found id: ""
	I0802 18:50:43.448104   58571 logs.go:276] 0 containers: []
	W0802 18:50:43.448115   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:43.448125   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:43.448143   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:43.502725   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:43.502755   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:43.518758   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:43.518780   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:43.592023   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:43.592045   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:43.592065   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:43.676414   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:43.676445   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:46.218523   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:46.235606   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:46.235666   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:46.274696   58571 cri.go:89] found id: ""
	I0802 18:50:46.274720   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.274727   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:46.274733   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:46.274793   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:46.317716   58571 cri.go:89] found id: ""
	I0802 18:50:46.317745   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.317756   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:46.317763   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:46.317823   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:46.354691   58571 cri.go:89] found id: ""
	I0802 18:50:46.354722   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.354733   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:46.354740   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:46.354812   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:46.392979   58571 cri.go:89] found id: ""
	I0802 18:50:46.393010   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.393021   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:46.393029   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:46.393093   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:46.426251   58571 cri.go:89] found id: ""
	I0802 18:50:46.426281   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.426291   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:46.426298   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:46.426362   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:46.459011   58571 cri.go:89] found id: ""
	I0802 18:50:46.459039   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.459050   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:46.459058   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:46.459138   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:46.491890   58571 cri.go:89] found id: ""
	I0802 18:50:46.491920   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.491930   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:46.491947   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:46.492011   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:46.541975   58571 cri.go:89] found id: ""
	I0802 18:50:46.542000   58571 logs.go:276] 0 containers: []
	W0802 18:50:46.542012   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:46.542025   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:46.542041   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:46.560477   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:46.560509   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:46.666319   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:46.666350   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:46.666369   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:46.755877   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:46.755918   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:46.794345   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:46.794374   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:49.347694   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:49.361735   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:49.361806   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:49.396443   58571 cri.go:89] found id: ""
	I0802 18:50:49.396469   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.396481   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:49.396491   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:49.396553   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:49.429316   58571 cri.go:89] found id: ""
	I0802 18:50:49.429343   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.429354   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:49.429363   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:49.429440   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:49.467869   58571 cri.go:89] found id: ""
	I0802 18:50:49.467892   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.467900   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:49.467905   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:49.467952   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:49.499520   58571 cri.go:89] found id: ""
	I0802 18:50:49.499546   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.499558   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:49.499566   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:49.499651   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:49.531837   58571 cri.go:89] found id: ""
	I0802 18:50:49.531860   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.531868   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:49.531874   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:49.531925   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:49.563166   58571 cri.go:89] found id: ""
	I0802 18:50:49.563193   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.563202   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:49.563212   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:49.563260   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:49.595951   58571 cri.go:89] found id: ""
	I0802 18:50:49.595976   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.595986   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:49.595994   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:49.596047   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:49.628941   58571 cri.go:89] found id: ""
	I0802 18:50:49.628969   58571 logs.go:276] 0 containers: []
	W0802 18:50:49.628977   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:49.628988   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:49.628999   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:49.704096   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:49.704123   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:49.704139   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:49.795096   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:49.795156   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:49.835860   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:49.835883   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:49.886518   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:49.886549   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:52.401455   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:52.415525   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:52.415581   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:52.448566   58571 cri.go:89] found id: ""
	I0802 18:50:52.448589   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.448600   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:52.448607   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:52.448685   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:52.485674   58571 cri.go:89] found id: ""
	I0802 18:50:52.485705   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.485717   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:52.485724   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:52.485786   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:52.521168   58571 cri.go:89] found id: ""
	I0802 18:50:52.521193   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.521204   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:52.521210   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:52.521274   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:52.554881   58571 cri.go:89] found id: ""
	I0802 18:50:52.554904   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.554912   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:52.554919   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:52.554980   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:52.586592   58571 cri.go:89] found id: ""
	I0802 18:50:52.586619   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.586630   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:52.586638   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:52.586704   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:52.626570   58571 cri.go:89] found id: ""
	I0802 18:50:52.626597   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.626610   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:52.626619   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:52.626678   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:52.662041   58571 cri.go:89] found id: ""
	I0802 18:50:52.662063   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.662070   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:52.662075   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:52.662124   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:52.701841   58571 cri.go:89] found id: ""
	I0802 18:50:52.701865   58571 logs.go:276] 0 containers: []
	W0802 18:50:52.701875   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:52.701887   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:52.701902   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:52.754463   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:52.754496   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:52.767617   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:52.767651   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:52.838017   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:52.838042   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:52.838058   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:52.935321   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:52.935351   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:55.480326   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:55.493202   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:55.493278   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:55.525348   58571 cri.go:89] found id: ""
	I0802 18:50:55.525380   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.525397   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:55.525406   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:55.525472   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:55.558756   58571 cri.go:89] found id: ""
	I0802 18:50:55.558784   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.558796   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:55.558805   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:55.558867   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:55.592420   58571 cri.go:89] found id: ""
	I0802 18:50:55.592449   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.592461   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:55.592470   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:55.592539   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:55.627839   58571 cri.go:89] found id: ""
	I0802 18:50:55.627862   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.627870   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:55.627876   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:55.627930   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:55.663942   58571 cri.go:89] found id: ""
	I0802 18:50:55.663971   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.663982   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:55.663990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:55.664051   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:55.697729   58571 cri.go:89] found id: ""
	I0802 18:50:55.697762   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.697774   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:55.697783   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:55.697850   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:55.730279   58571 cri.go:89] found id: ""
	I0802 18:50:55.730301   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.730311   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:55.730317   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:55.730382   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:55.768009   58571 cri.go:89] found id: ""
	I0802 18:50:55.768035   58571 logs.go:276] 0 containers: []
	W0802 18:50:55.768046   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:55.768057   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:55.768075   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:55.781122   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:55.781154   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:55.857305   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:55.857326   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:55.857342   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:55.952230   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:55.952267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:50:55.989675   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:55.989708   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:58.539944   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:50:58.553192   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:50:58.553267   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:50:58.590005   58571 cri.go:89] found id: ""
	I0802 18:50:58.590039   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.590052   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:50:58.590060   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:50:58.590160   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:50:58.624751   58571 cri.go:89] found id: ""
	I0802 18:50:58.624778   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.624798   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:50:58.624805   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:50:58.624877   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:50:58.659477   58571 cri.go:89] found id: ""
	I0802 18:50:58.659514   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.659526   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:50:58.659534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:50:58.659589   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:50:58.694256   58571 cri.go:89] found id: ""
	I0802 18:50:58.694281   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.694291   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:50:58.694300   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:50:58.694365   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:50:58.728018   58571 cri.go:89] found id: ""
	I0802 18:50:58.728042   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.728052   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:50:58.728060   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:50:58.728114   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:50:58.764905   58571 cri.go:89] found id: ""
	I0802 18:50:58.764934   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.764944   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:50:58.764952   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:50:58.765020   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:50:58.798138   58571 cri.go:89] found id: ""
	I0802 18:50:58.798166   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.798177   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:50:58.798190   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:50:58.798255   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:50:58.831159   58571 cri.go:89] found id: ""
	I0802 18:50:58.831192   58571 logs.go:276] 0 containers: []
	W0802 18:50:58.831206   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:50:58.831218   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:50:58.831235   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:50:58.881683   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:50:58.881720   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:50:58.895075   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:50:58.895125   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:50:58.969514   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:50:58.969551   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:50:58.969568   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:50:59.046704   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:50:59.046742   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:01.588114   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:01.600249   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:01.600322   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:01.632214   58571 cri.go:89] found id: ""
	I0802 18:51:01.632245   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.632252   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:01.632259   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:01.632318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:01.662153   58571 cri.go:89] found id: ""
	I0802 18:51:01.662182   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.662192   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:01.662200   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:01.662250   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:01.693305   58571 cri.go:89] found id: ""
	I0802 18:51:01.693339   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.693349   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:01.693356   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:01.693416   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:01.724818   58571 cri.go:89] found id: ""
	I0802 18:51:01.724847   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.724858   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:01.724866   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:01.724919   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:01.757525   58571 cri.go:89] found id: ""
	I0802 18:51:01.757547   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.757554   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:01.757560   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:01.757607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:01.790965   58571 cri.go:89] found id: ""
	I0802 18:51:01.790992   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.791002   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:01.791010   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:01.791057   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:01.823699   58571 cri.go:89] found id: ""
	I0802 18:51:01.823727   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.823734   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:01.823740   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:01.823799   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:01.863183   58571 cri.go:89] found id: ""
	I0802 18:51:01.863214   58571 logs.go:276] 0 containers: []
	W0802 18:51:01.863227   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:01.863240   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:01.863258   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:01.923187   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:01.923223   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:01.938084   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:01.938110   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:02.019977   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:02.020003   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:02.020018   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:02.096247   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:02.096282   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:04.632838   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:04.644972   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:04.645035   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:04.675525   58571 cri.go:89] found id: ""
	I0802 18:51:04.675559   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.675569   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:04.675578   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:04.675642   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:04.706975   58571 cri.go:89] found id: ""
	I0802 18:51:04.707002   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.707012   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:04.707020   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:04.707084   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:04.741106   58571 cri.go:89] found id: ""
	I0802 18:51:04.741133   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.741140   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:04.741146   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:04.741204   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:04.772602   58571 cri.go:89] found id: ""
	I0802 18:51:04.772628   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.772636   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:04.772642   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:04.772688   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:04.804335   58571 cri.go:89] found id: ""
	I0802 18:51:04.804365   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.804377   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:04.804386   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:04.804456   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:04.835857   58571 cri.go:89] found id: ""
	I0802 18:51:04.835886   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.835895   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:04.835901   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:04.835950   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:04.866784   58571 cri.go:89] found id: ""
	I0802 18:51:04.866819   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.866829   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:04.866837   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:04.866897   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:04.899150   58571 cri.go:89] found id: ""
	I0802 18:51:04.899179   58571 logs.go:276] 0 containers: []
	W0802 18:51:04.899189   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:04.899201   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:04.899215   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:04.936230   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:04.936267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:04.988535   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:04.988563   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:05.002872   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:05.002898   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:05.063954   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:05.063975   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:05.063987   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:07.642692   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:07.655688   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:07.655762   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:07.687636   58571 cri.go:89] found id: ""
	I0802 18:51:07.687666   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.687677   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:07.687685   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:07.687754   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:07.718561   58571 cri.go:89] found id: ""
	I0802 18:51:07.718588   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.718599   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:07.718607   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:07.718671   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:07.749400   58571 cri.go:89] found id: ""
	I0802 18:51:07.749426   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.749435   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:07.749441   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:07.749492   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:07.789099   58571 cri.go:89] found id: ""
	I0802 18:51:07.789123   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.789132   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:07.789139   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:07.789201   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:07.825040   58571 cri.go:89] found id: ""
	I0802 18:51:07.825063   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.825070   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:07.825076   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:07.825129   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:07.855869   58571 cri.go:89] found id: ""
	I0802 18:51:07.855891   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.855898   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:07.855904   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:07.855950   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:07.894374   58571 cri.go:89] found id: ""
	I0802 18:51:07.894418   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.894430   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:07.894436   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:07.894505   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:07.926774   58571 cri.go:89] found id: ""
	I0802 18:51:07.926801   58571 logs.go:276] 0 containers: []
	W0802 18:51:07.926809   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:07.926822   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:07.926833   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:07.981572   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:07.981604   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:07.995283   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:07.995316   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:08.066982   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:08.067003   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:08.067020   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:08.145902   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:08.145937   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:10.680662   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:10.692883   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:10.692957   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:10.724793   58571 cri.go:89] found id: ""
	I0802 18:51:10.724815   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.724822   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:10.724828   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:10.724875   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:10.760099   58571 cri.go:89] found id: ""
	I0802 18:51:10.760121   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.760128   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:10.760134   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:10.760183   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:10.796033   58571 cri.go:89] found id: ""
	I0802 18:51:10.796061   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.796072   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:10.796079   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:10.796130   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:10.828623   58571 cri.go:89] found id: ""
	I0802 18:51:10.828650   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.828662   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:10.828668   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:10.828724   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:10.860412   58571 cri.go:89] found id: ""
	I0802 18:51:10.860435   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.860443   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:10.860448   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:10.860508   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:10.892958   58571 cri.go:89] found id: ""
	I0802 18:51:10.892988   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.892999   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:10.893007   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:10.893074   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:10.928159   58571 cri.go:89] found id: ""
	I0802 18:51:10.928182   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.928189   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:10.928194   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:10.928250   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:10.962169   58571 cri.go:89] found id: ""
	I0802 18:51:10.962193   58571 logs.go:276] 0 containers: []
	W0802 18:51:10.962201   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:10.962210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:10.962222   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:11.016299   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:11.016332   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:11.029331   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:11.029361   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:11.096014   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:11.096035   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:11.096047   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:11.171562   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:11.171602   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:13.708817   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:13.723738   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:13.723809   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:13.756986   58571 cri.go:89] found id: ""
	I0802 18:51:13.757012   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.757023   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:13.757029   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:13.757090   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:13.788986   58571 cri.go:89] found id: ""
	I0802 18:51:13.789015   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.789027   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:13.789035   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:13.789101   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:13.819980   58571 cri.go:89] found id: ""
	I0802 18:51:13.820005   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.820015   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:13.820022   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:13.820084   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:13.852564   58571 cri.go:89] found id: ""
	I0802 18:51:13.852589   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.852600   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:13.852607   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:13.852670   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:13.885231   58571 cri.go:89] found id: ""
	I0802 18:51:13.885255   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.885265   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:13.885273   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:13.885325   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:13.926137   58571 cri.go:89] found id: ""
	I0802 18:51:13.926160   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.926168   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:13.926176   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:13.926230   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:13.966157   58571 cri.go:89] found id: ""
	I0802 18:51:13.966179   58571 logs.go:276] 0 containers: []
	W0802 18:51:13.966187   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:13.966192   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:13.966242   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:14.001059   58571 cri.go:89] found id: ""
	I0802 18:51:14.001098   58571 logs.go:276] 0 containers: []
	W0802 18:51:14.001109   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:14.001121   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:14.001135   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:14.064400   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:14.064438   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:14.083097   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:14.083161   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:14.161765   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:14.161803   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:14.161821   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:14.246678   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:14.246717   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:16.792258   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:16.810406   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:16.810494   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:16.848688   58571 cri.go:89] found id: ""
	I0802 18:51:16.848719   58571 logs.go:276] 0 containers: []
	W0802 18:51:16.848730   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:16.848738   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:16.848802   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:16.883351   58571 cri.go:89] found id: ""
	I0802 18:51:16.883382   58571 logs.go:276] 0 containers: []
	W0802 18:51:16.883409   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:16.883417   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:16.883479   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:16.936110   58571 cri.go:89] found id: ""
	I0802 18:51:16.936137   58571 logs.go:276] 0 containers: []
	W0802 18:51:16.936147   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:16.936154   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:16.936214   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:16.975449   58571 cri.go:89] found id: ""
	I0802 18:51:16.975478   58571 logs.go:276] 0 containers: []
	W0802 18:51:16.975489   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:16.975498   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:16.975563   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:17.014156   58571 cri.go:89] found id: ""
	I0802 18:51:17.014186   58571 logs.go:276] 0 containers: []
	W0802 18:51:17.014195   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:17.014204   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:17.014265   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:17.057531   58571 cri.go:89] found id: ""
	I0802 18:51:17.057561   58571 logs.go:276] 0 containers: []
	W0802 18:51:17.057573   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:17.057580   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:17.057653   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:17.100754   58571 cri.go:89] found id: ""
	I0802 18:51:17.100778   58571 logs.go:276] 0 containers: []
	W0802 18:51:17.100793   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:17.100800   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:17.100855   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:17.139282   58571 cri.go:89] found id: ""
	I0802 18:51:17.139320   58571 logs.go:276] 0 containers: []
	W0802 18:51:17.139331   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:17.139344   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:17.139360   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:17.203508   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:17.203542   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:17.221488   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:17.221517   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:17.302946   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:17.302970   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:17.302986   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:17.414967   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:17.415005   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:19.953681   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:19.970223   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:19.970322   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:20.015797   58571 cri.go:89] found id: ""
	I0802 18:51:20.015819   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.015830   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:20.015838   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:20.015899   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:20.054275   58571 cri.go:89] found id: ""
	I0802 18:51:20.054302   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.054312   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:20.054319   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:20.054383   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:20.096075   58571 cri.go:89] found id: ""
	I0802 18:51:20.096102   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.096112   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:20.096119   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:20.096178   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:20.129373   58571 cri.go:89] found id: ""
	I0802 18:51:20.129398   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.129408   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:20.129415   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:20.129478   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:20.168048   58571 cri.go:89] found id: ""
	I0802 18:51:20.168080   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.168092   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:20.168101   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:20.168174   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:20.203293   58571 cri.go:89] found id: ""
	I0802 18:51:20.203319   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.203327   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:20.203335   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:20.203404   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:20.241375   58571 cri.go:89] found id: ""
	I0802 18:51:20.241400   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.241408   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:20.241413   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:20.241476   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:20.278766   58571 cri.go:89] found id: ""
	I0802 18:51:20.278795   58571 logs.go:276] 0 containers: []
	W0802 18:51:20.278803   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:20.278812   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:20.278832   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:20.329294   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:20.329332   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:20.342590   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:20.342622   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:20.426178   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:20.426200   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:20.426215   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:20.515974   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:20.516015   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:23.055033   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:23.068791   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:23.068874   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:23.102015   58571 cri.go:89] found id: ""
	I0802 18:51:23.102044   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.102052   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:23.102058   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:23.102123   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:23.137326   58571 cri.go:89] found id: ""
	I0802 18:51:23.137355   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.137366   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:23.137374   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:23.137447   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:23.178539   58571 cri.go:89] found id: ""
	I0802 18:51:23.178563   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.178570   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:23.178576   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:23.178640   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:23.218046   58571 cri.go:89] found id: ""
	I0802 18:51:23.218068   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.218079   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:23.218085   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:23.218146   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:23.255383   58571 cri.go:89] found id: ""
	I0802 18:51:23.255412   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.255420   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:23.255426   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:23.255486   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:23.290314   58571 cri.go:89] found id: ""
	I0802 18:51:23.290340   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.290347   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:23.290353   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:23.290402   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:23.324147   58571 cri.go:89] found id: ""
	I0802 18:51:23.324187   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.324196   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:23.324201   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:23.324252   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:23.363255   58571 cri.go:89] found id: ""
	I0802 18:51:23.363279   58571 logs.go:276] 0 containers: []
	W0802 18:51:23.363286   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:23.363295   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:23.363313   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:23.403159   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:23.403188   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:23.452208   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:23.452245   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:23.465906   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:23.465935   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:23.538811   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:23.538838   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:23.538855   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:26.117152   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:26.130719   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:26.130802   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:26.164754   58571 cri.go:89] found id: ""
	I0802 18:51:26.164774   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.164781   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:26.164787   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:26.164839   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:26.202459   58571 cri.go:89] found id: ""
	I0802 18:51:26.202485   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.202492   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:26.202498   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:26.202555   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:26.236449   58571 cri.go:89] found id: ""
	I0802 18:51:26.236473   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.236484   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:26.236491   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:26.236554   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:26.268835   58571 cri.go:89] found id: ""
	I0802 18:51:26.268868   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.268880   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:26.268888   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:26.268954   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:26.309691   58571 cri.go:89] found id: ""
	I0802 18:51:26.309717   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.309734   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:26.309742   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:26.309799   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:26.341211   58571 cri.go:89] found id: ""
	I0802 18:51:26.341243   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.341254   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:26.341261   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:26.341328   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:26.397805   58571 cri.go:89] found id: ""
	I0802 18:51:26.397825   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.397833   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:26.397839   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:26.397898   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:26.431672   58571 cri.go:89] found id: ""
	I0802 18:51:26.431700   58571 logs.go:276] 0 containers: []
	W0802 18:51:26.431710   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:26.431722   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:26.431740   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:26.446836   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:26.446862   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:26.524404   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:26.524427   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:26.524442   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:26.624547   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:26.624581   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:26.664888   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:26.664923   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:29.225157   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:29.238603   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:29.238685   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:29.272517   58571 cri.go:89] found id: ""
	I0802 18:51:29.272557   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.272570   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:29.272578   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:29.272639   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:29.307603   58571 cri.go:89] found id: ""
	I0802 18:51:29.307624   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.307632   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:29.307637   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:29.307695   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:29.344050   58571 cri.go:89] found id: ""
	I0802 18:51:29.344085   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.344096   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:29.344105   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:29.344166   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:29.378597   58571 cri.go:89] found id: ""
	I0802 18:51:29.378624   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.378632   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:29.378638   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:29.378696   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:29.414234   58571 cri.go:89] found id: ""
	I0802 18:51:29.414256   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.414265   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:29.414270   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:29.414332   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:29.451639   58571 cri.go:89] found id: ""
	I0802 18:51:29.451665   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.451672   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:29.451678   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:29.451738   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:29.486182   58571 cri.go:89] found id: ""
	I0802 18:51:29.486205   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.486212   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:29.486229   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:29.486291   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:29.521912   58571 cri.go:89] found id: ""
	I0802 18:51:29.521945   58571 logs.go:276] 0 containers: []
	W0802 18:51:29.521957   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:29.521969   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:29.521985   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:29.602438   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:29.602463   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:29.647858   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:29.647891   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:29.702527   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:29.702565   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:29.717217   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:29.717249   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:29.788530   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:32.289557   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:32.302589   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:32.302650   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:32.339474   58571 cri.go:89] found id: ""
	I0802 18:51:32.339504   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.339515   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:32.339523   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:32.339589   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:32.381019   58571 cri.go:89] found id: ""
	I0802 18:51:32.381056   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.381067   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:32.381074   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:32.381141   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:32.417182   58571 cri.go:89] found id: ""
	I0802 18:51:32.417209   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.417219   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:32.417227   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:32.417289   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:32.450348   58571 cri.go:89] found id: ""
	I0802 18:51:32.450370   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.450377   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:32.450382   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:32.450447   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:32.485495   58571 cri.go:89] found id: ""
	I0802 18:51:32.485522   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.485533   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:32.485546   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:32.485610   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:32.518804   58571 cri.go:89] found id: ""
	I0802 18:51:32.518841   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.518854   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:32.518863   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:32.518939   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:32.553137   58571 cri.go:89] found id: ""
	I0802 18:51:32.553164   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.553172   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:32.553178   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:32.553235   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:32.590516   58571 cri.go:89] found id: ""
	I0802 18:51:32.590542   58571 logs.go:276] 0 containers: []
	W0802 18:51:32.590553   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:32.590564   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:32.590579   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:32.638745   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:32.638783   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:32.651459   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:32.651486   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:32.719753   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:32.719776   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:32.719791   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:32.806065   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:32.806103   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:35.348320   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:35.361361   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:35.361442   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:35.392884   58571 cri.go:89] found id: ""
	I0802 18:51:35.392913   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.392920   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:35.392926   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:35.392976   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:35.425419   58571 cri.go:89] found id: ""
	I0802 18:51:35.425444   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.425453   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:35.425458   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:35.425525   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:35.457451   58571 cri.go:89] found id: ""
	I0802 18:51:35.457489   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.457500   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:35.457507   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:35.457572   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:35.489070   58571 cri.go:89] found id: ""
	I0802 18:51:35.489098   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.489116   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:35.489123   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:35.489189   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:35.519871   58571 cri.go:89] found id: ""
	I0802 18:51:35.519900   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.519912   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:35.519920   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:35.519992   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:35.558406   58571 cri.go:89] found id: ""
	I0802 18:51:35.558432   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.558443   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:35.558451   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:35.558517   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:35.592567   58571 cri.go:89] found id: ""
	I0802 18:51:35.592592   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.592600   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:35.592606   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:35.592692   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:35.627249   58571 cri.go:89] found id: ""
	I0802 18:51:35.627278   58571 logs.go:276] 0 containers: []
	W0802 18:51:35.627292   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:35.627304   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:35.627318   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:35.674482   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:35.674517   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:35.687941   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:35.687971   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:35.751264   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:35.751290   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:35.751306   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:35.827519   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:35.827556   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:38.373671   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:38.386305   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:38.386374   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:38.423058   58571 cri.go:89] found id: ""
	I0802 18:51:38.423087   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.423094   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:38.423115   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:38.423167   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:38.455749   58571 cri.go:89] found id: ""
	I0802 18:51:38.455774   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.455782   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:38.455787   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:38.455844   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:38.509225   58571 cri.go:89] found id: ""
	I0802 18:51:38.509249   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.509256   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:38.509262   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:38.509324   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:38.552620   58571 cri.go:89] found id: ""
	I0802 18:51:38.552647   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.552657   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:38.552665   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:38.552736   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:38.607287   58571 cri.go:89] found id: ""
	I0802 18:51:38.607308   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.607316   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:38.607323   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:38.607386   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:38.639058   58571 cri.go:89] found id: ""
	I0802 18:51:38.639086   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.639096   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:38.639122   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:38.639191   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:38.670075   58571 cri.go:89] found id: ""
	I0802 18:51:38.670099   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.670108   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:38.670115   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:38.670178   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:38.701179   58571 cri.go:89] found id: ""
	I0802 18:51:38.701207   58571 logs.go:276] 0 containers: []
	W0802 18:51:38.701218   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:38.701229   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:38.701246   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:38.770882   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:38.770902   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:38.770914   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:38.860712   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:38.860749   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:38.897930   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:38.897960   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:38.948238   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:38.948278   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
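The cycle above repeats every few seconds: probe for a running kube-apiserver process with pgrep, ask crictl for each expected control-plane container by name, and fall back to gathering kubelet, dmesg, CRI-O and container-status logs when nothing is found. A minimal Go sketch of that probe pattern follows; the helper names (apiserverRunning, listContainerIDs), the componentNames list, and the fixed 3-second sleep are illustrative assumptions for this sketch, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Components the log above queries one by one via crictl.
var componentNames = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*` probe.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
// empty output means 0 containers for that name.
func listContainerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for i := 0; i < 3; i++ { // the real loop keeps polling until a deadline
		if !apiserverRunning() {
			fmt.Println("kube-apiserver process not found")
		}
		for _, name := range componentNames {
			fmt.Printf("%s: %d containers\n", name, len(listContainerIDs(name)))
		}
		time.Sleep(3 * time.Second) // the log shows roughly 3s between cycles
	}
}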
	I0802 18:51:41.462311   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:41.475433   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:41.475507   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:41.509582   58571 cri.go:89] found id: ""
	I0802 18:51:41.509617   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.509627   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:41.509635   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:41.509701   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:41.542163   58571 cri.go:89] found id: ""
	I0802 18:51:41.542193   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.542204   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:41.542212   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:41.542275   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:41.573186   58571 cri.go:89] found id: ""
	I0802 18:51:41.573208   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.573217   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:41.573223   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:41.573284   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:41.605758   58571 cri.go:89] found id: ""
	I0802 18:51:41.605786   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.605798   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:41.605806   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:41.605865   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:41.636222   58571 cri.go:89] found id: ""
	I0802 18:51:41.636246   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.636265   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:41.636273   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:41.636335   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:41.666948   58571 cri.go:89] found id: ""
	I0802 18:51:41.666977   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.666988   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:41.666995   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:41.667048   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:41.698148   58571 cri.go:89] found id: ""
	I0802 18:51:41.698173   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.698183   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:41.698190   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:41.698255   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:41.730703   58571 cri.go:89] found id: ""
	I0802 18:51:41.730739   58571 logs.go:276] 0 containers: []
	W0802 18:51:41.730749   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:41.730761   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:41.730774   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:41.782805   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:41.782835   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:41.796728   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:41.796750   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:41.869101   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:41.869121   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:41.869142   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:41.946309   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:41.946365   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:44.493947   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:44.506298   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:44.506398   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:44.539071   58571 cri.go:89] found id: ""
	I0802 18:51:44.539126   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.539138   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:44.539146   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:44.539211   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:44.571991   58571 cri.go:89] found id: ""
	I0802 18:51:44.572020   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.572031   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:44.572040   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:44.572100   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:44.605550   58571 cri.go:89] found id: ""
	I0802 18:51:44.605575   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.605583   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:44.605589   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:44.605642   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:44.639465   58571 cri.go:89] found id: ""
	I0802 18:51:44.639490   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.639498   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:44.639508   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:44.639567   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:44.672914   58571 cri.go:89] found id: ""
	I0802 18:51:44.672943   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.672955   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:44.672970   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:44.673034   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:44.706363   58571 cri.go:89] found id: ""
	I0802 18:51:44.706387   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.706395   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:44.706402   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:44.706462   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:44.739029   58571 cri.go:89] found id: ""
	I0802 18:51:44.739057   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.739069   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:44.739077   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:44.739158   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:44.771160   58571 cri.go:89] found id: ""
	I0802 18:51:44.771183   58571 logs.go:276] 0 containers: []
	W0802 18:51:44.771191   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:44.771201   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:44.771216   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:44.808785   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:44.808809   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:44.856903   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:44.856938   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:44.870290   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:44.870325   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:44.940307   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:44.940326   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:44.940337   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:47.518897   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:47.533223   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:47.533305   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:47.566411   58571 cri.go:89] found id: ""
	I0802 18:51:47.566438   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.566452   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:47.566459   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:47.566522   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:47.600603   58571 cri.go:89] found id: ""
	I0802 18:51:47.600628   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.600637   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:47.600645   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:47.600699   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:47.633061   58571 cri.go:89] found id: ""
	I0802 18:51:47.633083   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.633091   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:47.633096   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:47.633147   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:47.666822   58571 cri.go:89] found id: ""
	I0802 18:51:47.666854   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.666866   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:47.666876   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:47.666944   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:47.699484   58571 cri.go:89] found id: ""
	I0802 18:51:47.699504   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.699512   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:47.699518   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:47.699563   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:47.737454   58571 cri.go:89] found id: ""
	I0802 18:51:47.737475   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.737483   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:47.737491   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:47.737573   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:47.769898   58571 cri.go:89] found id: ""
	I0802 18:51:47.769920   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.769929   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:47.769936   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:47.769999   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:47.802186   58571 cri.go:89] found id: ""
	I0802 18:51:47.802211   58571 logs.go:276] 0 containers: []
	W0802 18:51:47.802220   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:47.802231   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:47.802245   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:47.851189   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:47.851225   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:47.864659   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:47.864690   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:47.927733   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:47.927756   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:47.927773   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:48.002490   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:48.002534   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:50.539631   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:50.553324   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:50.553405   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:50.585924   58571 cri.go:89] found id: ""
	I0802 18:51:50.585951   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.585960   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:50.585967   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:50.586037   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:50.621832   58571 cri.go:89] found id: ""
	I0802 18:51:50.621855   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.621862   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:50.621867   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:50.621926   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:50.654125   58571 cri.go:89] found id: ""
	I0802 18:51:50.654149   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.654155   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:50.654161   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:50.654220   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:50.687481   58571 cri.go:89] found id: ""
	I0802 18:51:50.687516   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.687527   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:50.687537   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:50.687600   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:50.717546   58571 cri.go:89] found id: ""
	I0802 18:51:50.717568   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.717577   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:50.717584   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:50.717659   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:50.748851   58571 cri.go:89] found id: ""
	I0802 18:51:50.748881   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.748892   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:50.748900   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:50.748965   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:50.782472   58571 cri.go:89] found id: ""
	I0802 18:51:50.782497   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.782507   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:50.782514   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:50.782573   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:50.818442   58571 cri.go:89] found id: ""
	I0802 18:51:50.818467   58571 logs.go:276] 0 containers: []
	W0802 18:51:50.818477   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:50.818489   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:50.818505   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:50.866597   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:50.866630   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:50.882246   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:50.882283   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:50.951707   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:50.951734   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:50.951752   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:51.031923   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:51.031965   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:53.573633   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:53.586230   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:53.586313   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:53.617223   58571 cri.go:89] found id: ""
	I0802 18:51:53.617253   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.617264   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:53.617273   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:53.617325   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:53.648671   58571 cri.go:89] found id: ""
	I0802 18:51:53.648701   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.648708   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:53.648714   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:53.648763   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:53.681659   58571 cri.go:89] found id: ""
	I0802 18:51:53.681687   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.681697   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:53.681704   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:53.681767   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:53.715375   58571 cri.go:89] found id: ""
	I0802 18:51:53.715399   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.715409   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:53.715417   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:53.715503   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:53.750105   58571 cri.go:89] found id: ""
	I0802 18:51:53.750132   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.750142   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:53.750149   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:53.750220   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:53.784531   58571 cri.go:89] found id: ""
	I0802 18:51:53.784557   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.784567   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:53.784574   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:53.784636   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:53.820681   58571 cri.go:89] found id: ""
	I0802 18:51:53.820705   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.820712   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:53.820724   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:53.820778   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:53.853687   58571 cri.go:89] found id: ""
	I0802 18:51:53.853711   58571 logs.go:276] 0 containers: []
	W0802 18:51:53.853719   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:53.853730   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:53.853748   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:53.903805   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:53.903840   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:53.916861   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:53.916890   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:53.980934   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:53.980958   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:53.980974   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:54.056965   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:54.057002   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:56.593184   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:56.606701   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:56.606775   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:56.638423   58571 cri.go:89] found id: ""
	I0802 18:51:56.638451   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.638462   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:56.638471   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:56.638535   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:56.671716   58571 cri.go:89] found id: ""
	I0802 18:51:56.671747   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.671759   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:56.671766   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:56.671838   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:56.705237   58571 cri.go:89] found id: ""
	I0802 18:51:56.705274   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.705291   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:56.705297   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:56.705363   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:56.736454   58571 cri.go:89] found id: ""
	I0802 18:51:56.736482   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.736493   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:56.736501   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:56.736572   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:56.769491   58571 cri.go:89] found id: ""
	I0802 18:51:56.769512   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.769521   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:56.769528   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:56.769596   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:56.802195   58571 cri.go:89] found id: ""
	I0802 18:51:56.802217   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.802227   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:56.802234   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:56.802294   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:56.836785   58571 cri.go:89] found id: ""
	I0802 18:51:56.836809   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.836816   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:56.836828   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:56.836887   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:56.870658   58571 cri.go:89] found id: ""
	I0802 18:51:56.870686   58571 logs.go:276] 0 containers: []
	W0802 18:51:56.870697   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:56.870709   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:51:56.870724   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:51:56.924796   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:51:56.924834   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:51:56.938791   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:56.938817   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:57.005253   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:57.005280   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:57.005306   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:51:57.081638   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:51:57.081671   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:51:59.618658   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:51:59.631031   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:51:59.631096   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:51:59.662312   58571 cri.go:89] found id: ""
	I0802 18:51:59.662337   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.662347   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:51:59.662355   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:51:59.662417   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:51:59.700491   58571 cri.go:89] found id: ""
	I0802 18:51:59.700515   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.700524   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:51:59.700532   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:51:59.700598   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:51:59.736044   58571 cri.go:89] found id: ""
	I0802 18:51:59.736073   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.736082   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:51:59.736087   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:51:59.736142   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:51:59.770451   58571 cri.go:89] found id: ""
	I0802 18:51:59.770473   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.770481   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:51:59.770487   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:51:59.770535   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:51:59.806549   58571 cri.go:89] found id: ""
	I0802 18:51:59.806575   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.806583   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:51:59.806589   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:51:59.806649   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:51:59.839419   58571 cri.go:89] found id: ""
	I0802 18:51:59.839439   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.839447   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:51:59.839454   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:51:59.839499   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:51:59.870460   58571 cri.go:89] found id: ""
	I0802 18:51:59.870490   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.870506   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:51:59.870528   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:51:59.870599   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:51:59.902147   58571 cri.go:89] found id: ""
	I0802 18:51:59.902174   58571 logs.go:276] 0 containers: []
	W0802 18:51:59.902187   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:51:59.902201   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:51:59.902217   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:51:59.965273   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:51:59.965295   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:51:59.965312   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:00.048540   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:00.048581   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:00.087687   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:00.087714   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:00.138107   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:00.138141   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
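Every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused", which simply means nothing is listening on the apiserver port yet. A minimal sketch of an equivalent reachability probe, assuming the localhost:8443 address used throughout this log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the apiserver port directly; a refused connection reproduces the
	// same condition kubectl reports above, without invoking kubectl at all.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}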
	I0802 18:52:02.652050   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:02.664502   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:02.664560   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:02.696979   58571 cri.go:89] found id: ""
	I0802 18:52:02.697013   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.697026   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:02.697035   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:02.697091   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:02.730544   58571 cri.go:89] found id: ""
	I0802 18:52:02.730571   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.730582   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:02.730595   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:02.730671   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:02.763531   58571 cri.go:89] found id: ""
	I0802 18:52:02.763558   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.763568   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:02.763575   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:02.763638   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:02.799805   58571 cri.go:89] found id: ""
	I0802 18:52:02.799831   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.799841   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:02.799848   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:02.799909   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:02.834886   58571 cri.go:89] found id: ""
	I0802 18:52:02.834927   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.834938   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:02.834953   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:02.835011   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:02.868059   58571 cri.go:89] found id: ""
	I0802 18:52:02.868082   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.868090   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:02.868097   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:02.868157   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:02.900703   58571 cri.go:89] found id: ""
	I0802 18:52:02.900730   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.900739   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:02.900754   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:02.900819   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:02.936013   58571 cri.go:89] found id: ""
	I0802 18:52:02.936040   58571 logs.go:276] 0 containers: []
	W0802 18:52:02.936050   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:02.936062   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:02.936078   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:02.949601   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:02.949637   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:03.024024   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:03.024046   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:03.024061   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:03.102237   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:03.102272   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:03.150103   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:03.150130   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:05.702410   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:05.715495   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:05.715554   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:05.746939   58571 cri.go:89] found id: ""
	I0802 18:52:05.746962   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.746970   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:05.746976   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:05.747022   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:05.782753   58571 cri.go:89] found id: ""
	I0802 18:52:05.782781   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.782791   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:05.782799   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:05.782858   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:05.824108   58571 cri.go:89] found id: ""
	I0802 18:52:05.824129   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.824137   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:05.824143   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:05.824201   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:05.860326   58571 cri.go:89] found id: ""
	I0802 18:52:05.860353   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.860373   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:05.860382   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:05.860441   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:05.892954   58571 cri.go:89] found id: ""
	I0802 18:52:05.892978   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.892986   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:05.892992   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:05.893054   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:05.924030   58571 cri.go:89] found id: ""
	I0802 18:52:05.924053   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.924061   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:05.924068   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:05.924117   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:05.956831   58571 cri.go:89] found id: ""
	I0802 18:52:05.956864   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.956890   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:05.956898   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:05.956964   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:05.990062   58571 cri.go:89] found id: ""
	I0802 18:52:05.990084   58571 logs.go:276] 0 containers: []
	W0802 18:52:05.990094   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:05.990105   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:05.990121   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:06.025496   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:06.025521   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:06.075588   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:06.075623   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:06.089539   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:06.089573   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:06.152454   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:06.152477   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:06.152493   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:08.729457   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:08.742082   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:08.742149   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:08.775837   58571 cri.go:89] found id: ""
	I0802 18:52:08.775860   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.775869   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:08.775876   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:08.775936   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:08.809066   58571 cri.go:89] found id: ""
	I0802 18:52:08.809091   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.809100   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:08.809106   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:08.809153   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:08.840360   58571 cri.go:89] found id: ""
	I0802 18:52:08.840390   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.840398   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:08.840408   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:08.840471   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:08.872411   58571 cri.go:89] found id: ""
	I0802 18:52:08.872438   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.872449   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:08.872456   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:08.872518   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:08.903489   58571 cri.go:89] found id: ""
	I0802 18:52:08.903515   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.903525   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:08.903533   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:08.903599   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:08.940046   58571 cri.go:89] found id: ""
	I0802 18:52:08.940080   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.940091   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:08.940099   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:08.940162   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:08.976868   58571 cri.go:89] found id: ""
	I0802 18:52:08.976897   58571 logs.go:276] 0 containers: []
	W0802 18:52:08.976907   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:08.976915   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:08.976976   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:09.008701   58571 cri.go:89] found id: ""
	I0802 18:52:09.008730   58571 logs.go:276] 0 containers: []
	W0802 18:52:09.008741   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:09.008754   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:09.008770   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:09.020935   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:09.020970   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:09.085253   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:09.085279   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:09.085310   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:09.164015   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:09.164047   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:09.199708   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:09.199741   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:11.751414   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:11.764601   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:11.764680   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:11.797676   58571 cri.go:89] found id: ""
	I0802 18:52:11.797702   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.797712   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:11.797720   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:11.797776   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:11.827991   58571 cri.go:89] found id: ""
	I0802 18:52:11.828017   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.828028   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:11.828035   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:11.828093   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:11.862366   58571 cri.go:89] found id: ""
	I0802 18:52:11.862396   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.862407   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:11.862415   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:11.862478   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:11.898552   58571 cri.go:89] found id: ""
	I0802 18:52:11.898581   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.898591   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:11.898599   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:11.898667   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:11.934808   58571 cri.go:89] found id: ""
	I0802 18:52:11.934835   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.934844   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:11.934851   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:11.934912   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:11.970520   58571 cri.go:89] found id: ""
	I0802 18:52:11.970546   58571 logs.go:276] 0 containers: []
	W0802 18:52:11.970556   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:11.970573   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:11.970649   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:12.006848   58571 cri.go:89] found id: ""
	I0802 18:52:12.006879   58571 logs.go:276] 0 containers: []
	W0802 18:52:12.006890   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:12.006898   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:12.007002   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:12.043587   58571 cri.go:89] found id: ""
	I0802 18:52:12.043631   58571 logs.go:276] 0 containers: []
	W0802 18:52:12.043643   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:12.043655   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:12.043679   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:12.079451   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:12.079476   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:12.131515   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:12.131549   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:12.144435   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:12.144470   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:12.208270   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:12.208296   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:12.208312   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:14.801925   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:14.814762   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:14.814832   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:14.848041   58571 cri.go:89] found id: ""
	I0802 18:52:14.848073   58571 logs.go:276] 0 containers: []
	W0802 18:52:14.848084   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:14.848091   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:14.848156   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:14.880977   58571 cri.go:89] found id: ""
	I0802 18:52:14.881005   58571 logs.go:276] 0 containers: []
	W0802 18:52:14.881016   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:14.881023   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:14.881094   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:14.914187   58571 cri.go:89] found id: ""
	I0802 18:52:14.914218   58571 logs.go:276] 0 containers: []
	W0802 18:52:14.914228   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:14.914235   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:14.914315   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:14.954499   58571 cri.go:89] found id: ""
	I0802 18:52:14.954523   58571 logs.go:276] 0 containers: []
	W0802 18:52:14.954534   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:14.954542   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:14.954608   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:14.989333   58571 cri.go:89] found id: ""
	I0802 18:52:14.989366   58571 logs.go:276] 0 containers: []
	W0802 18:52:14.989378   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:14.989386   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:14.989456   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:15.024055   58571 cri.go:89] found id: ""
	I0802 18:52:15.024082   58571 logs.go:276] 0 containers: []
	W0802 18:52:15.024093   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:15.024101   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:15.024155   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:15.055748   58571 cri.go:89] found id: ""
	I0802 18:52:15.055778   58571 logs.go:276] 0 containers: []
	W0802 18:52:15.055789   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:15.055796   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:15.055863   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:15.087420   58571 cri.go:89] found id: ""
	I0802 18:52:15.087447   58571 logs.go:276] 0 containers: []
	W0802 18:52:15.087457   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:15.087469   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:15.087484   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:15.137458   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:15.137491   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:15.150404   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:15.150431   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:15.212058   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:15.212086   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:15.212102   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:15.286798   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:15.286825   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:17.822074   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:17.834532   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:17.834602   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:17.867773   58571 cri.go:89] found id: ""
	I0802 18:52:17.867798   58571 logs.go:276] 0 containers: []
	W0802 18:52:17.867805   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:17.867813   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:17.867871   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:17.900214   58571 cri.go:89] found id: ""
	I0802 18:52:17.900242   58571 logs.go:276] 0 containers: []
	W0802 18:52:17.900253   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:17.900260   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:17.900312   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:17.935965   58571 cri.go:89] found id: ""
	I0802 18:52:17.935991   58571 logs.go:276] 0 containers: []
	W0802 18:52:17.936001   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:17.936007   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:17.936052   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:17.970308   58571 cri.go:89] found id: ""
	I0802 18:52:17.970335   58571 logs.go:276] 0 containers: []
	W0802 18:52:17.970344   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:17.970350   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:17.970419   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:18.002788   58571 cri.go:89] found id: ""
	I0802 18:52:18.002819   58571 logs.go:276] 0 containers: []
	W0802 18:52:18.002827   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:18.002833   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:18.002883   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:18.037082   58571 cri.go:89] found id: ""
	I0802 18:52:18.037108   58571 logs.go:276] 0 containers: []
	W0802 18:52:18.037118   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:18.037125   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:18.037193   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:18.072919   58571 cri.go:89] found id: ""
	I0802 18:52:18.072950   58571 logs.go:276] 0 containers: []
	W0802 18:52:18.072959   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:18.072966   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:18.073031   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:18.131812   58571 cri.go:89] found id: ""
	I0802 18:52:18.131847   58571 logs.go:276] 0 containers: []
	W0802 18:52:18.131858   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:18.131870   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:18.131885   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:18.146246   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:18.146284   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:18.221231   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:18.221256   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:18.221272   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:18.298564   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:18.298596   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:18.336085   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:18.336118   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:20.884914   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:20.897926   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:20.898005   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:20.933708   58571 cri.go:89] found id: ""
	I0802 18:52:20.933737   58571 logs.go:276] 0 containers: []
	W0802 18:52:20.933749   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:20.933758   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:20.933818   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:20.966674   58571 cri.go:89] found id: ""
	I0802 18:52:20.966705   58571 logs.go:276] 0 containers: []
	W0802 18:52:20.966717   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:20.966724   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:20.966777   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:21.000651   58571 cri.go:89] found id: ""
	I0802 18:52:21.000677   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.000689   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:21.000697   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:21.000763   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:21.033207   58571 cri.go:89] found id: ""
	I0802 18:52:21.033232   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.033240   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:21.033246   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:21.033296   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:21.064133   58571 cri.go:89] found id: ""
	I0802 18:52:21.064161   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.064172   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:21.064184   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:21.064248   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:21.094819   58571 cri.go:89] found id: ""
	I0802 18:52:21.094849   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.094860   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:21.094868   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:21.094928   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:21.126168   58571 cri.go:89] found id: ""
	I0802 18:52:21.126199   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.126208   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:21.126215   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:21.126264   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:21.157769   58571 cri.go:89] found id: ""
	I0802 18:52:21.157794   58571 logs.go:276] 0 containers: []
	W0802 18:52:21.157803   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:21.157812   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:21.157825   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:21.203231   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:21.203260   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:21.216186   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:21.216209   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:21.277416   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:21.277436   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:21.277447   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:21.360214   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:21.360249   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:23.901187   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:23.913740   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:23.913813   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:23.945530   58571 cri.go:89] found id: ""
	I0802 18:52:23.945561   58571 logs.go:276] 0 containers: []
	W0802 18:52:23.945573   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:23.945582   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:23.945650   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:23.979379   58571 cri.go:89] found id: ""
	I0802 18:52:23.979408   58571 logs.go:276] 0 containers: []
	W0802 18:52:23.979419   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:23.979433   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:23.979496   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:24.011468   58571 cri.go:89] found id: ""
	I0802 18:52:24.011488   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.011496   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:24.011502   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:24.011550   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:24.044595   58571 cri.go:89] found id: ""
	I0802 18:52:24.044622   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.044632   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:24.044646   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:24.044710   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:24.077939   58571 cri.go:89] found id: ""
	I0802 18:52:24.077968   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.077981   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:24.077989   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:24.078044   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:24.110713   58571 cri.go:89] found id: ""
	I0802 18:52:24.110740   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.110750   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:24.110758   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:24.110827   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:24.142659   58571 cri.go:89] found id: ""
	I0802 18:52:24.142688   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.142699   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:24.142706   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:24.142763   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:24.174169   58571 cri.go:89] found id: ""
	I0802 18:52:24.174200   58571 logs.go:276] 0 containers: []
	W0802 18:52:24.174211   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:24.174223   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:24.174237   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:24.224437   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:24.224467   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:24.237664   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:24.237692   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:24.312569   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:24.312589   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:24.312600   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:24.392659   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:24.392696   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:26.930158   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:26.942588   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:26.942650   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:26.978721   58571 cri.go:89] found id: ""
	I0802 18:52:26.978743   58571 logs.go:276] 0 containers: []
	W0802 18:52:26.978751   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:26.978756   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:26.978806   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:27.012995   58571 cri.go:89] found id: ""
	I0802 18:52:27.013022   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.013030   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:27.013036   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:27.013084   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:27.044334   58571 cri.go:89] found id: ""
	I0802 18:52:27.044361   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.044371   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:27.044377   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:27.044436   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:27.077298   58571 cri.go:89] found id: ""
	I0802 18:52:27.077325   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.077335   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:27.077342   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:27.077402   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:27.109703   58571 cri.go:89] found id: ""
	I0802 18:52:27.109734   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.109744   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:27.109751   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:27.109815   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:27.140668   58571 cri.go:89] found id: ""
	I0802 18:52:27.140703   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.140714   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:27.140723   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:27.140778   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:27.172648   58571 cri.go:89] found id: ""
	I0802 18:52:27.172680   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.172689   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:27.172695   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:27.172747   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:27.206507   58571 cri.go:89] found id: ""
	I0802 18:52:27.206538   58571 logs.go:276] 0 containers: []
	W0802 18:52:27.206547   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:27.206565   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:27.206581   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:27.260037   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:27.260073   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:27.273473   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:27.273500   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:27.345818   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:27.345837   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:27.345853   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:27.440731   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:27.440767   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:29.979175   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:29.991843   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:29.991905   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:30.029348   58571 cri.go:89] found id: ""
	I0802 18:52:30.029374   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.029382   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:30.029388   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:30.029436   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:30.062337   58571 cri.go:89] found id: ""
	I0802 18:52:30.062362   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.062372   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:30.062380   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:30.062439   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:30.097825   58571 cri.go:89] found id: ""
	I0802 18:52:30.097848   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.097858   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:30.097865   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:30.097928   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:30.130199   58571 cri.go:89] found id: ""
	I0802 18:52:30.130225   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.130236   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:30.130245   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:30.130297   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:30.170595   58571 cri.go:89] found id: ""
	I0802 18:52:30.170621   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.170637   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:30.170645   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:30.170706   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:30.207270   58571 cri.go:89] found id: ""
	I0802 18:52:30.207289   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.207297   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:30.207304   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:30.207364   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:30.240279   58571 cri.go:89] found id: ""
	I0802 18:52:30.240306   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.240317   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:30.240325   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:30.240403   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:30.276113   58571 cri.go:89] found id: ""
	I0802 18:52:30.276142   58571 logs.go:276] 0 containers: []
	W0802 18:52:30.276154   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:30.276167   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:30.276183   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:30.361863   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:30.361890   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:30.361911   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:30.443862   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:30.443903   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:30.483784   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:30.483817   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:30.534561   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:30.534597   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:33.048737   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:33.061099   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:33.061162   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:33.094588   58571 cri.go:89] found id: ""
	I0802 18:52:33.094618   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.094628   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:33.094634   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:33.094696   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:33.126554   58571 cri.go:89] found id: ""
	I0802 18:52:33.126581   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.126593   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:33.126601   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:33.126655   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:33.158131   58571 cri.go:89] found id: ""
	I0802 18:52:33.158157   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.158164   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:33.158170   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:33.158221   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:33.189454   58571 cri.go:89] found id: ""
	I0802 18:52:33.189482   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.189491   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:33.189499   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:33.189554   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:33.221258   58571 cri.go:89] found id: ""
	I0802 18:52:33.221287   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.221297   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:33.221304   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:33.221367   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:33.257701   58571 cri.go:89] found id: ""
	I0802 18:52:33.257726   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.257736   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:33.257742   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:33.257802   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:33.293089   58571 cri.go:89] found id: ""
	I0802 18:52:33.293112   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.293120   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:33.293124   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:33.293182   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:33.325638   58571 cri.go:89] found id: ""
	I0802 18:52:33.325671   58571 logs.go:276] 0 containers: []
	W0802 18:52:33.325683   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:33.325695   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:33.325708   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:33.365830   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:33.365862   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:33.416495   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:33.416531   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:33.429603   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:33.429632   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:33.491553   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:33.491575   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:33.491592   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:36.074051   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:36.087024   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:36.087132   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:36.139927   58571 cri.go:89] found id: ""
	I0802 18:52:36.139958   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.139968   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:36.139976   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:36.140040   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:36.178277   58571 cri.go:89] found id: ""
	I0802 18:52:36.178300   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.178308   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:36.178315   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:36.178375   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:36.213662   58571 cri.go:89] found id: ""
	I0802 18:52:36.213691   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.213702   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:36.213710   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:36.213773   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:36.246434   58571 cri.go:89] found id: ""
	I0802 18:52:36.246455   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.246462   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:36.246468   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:36.246522   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:36.284813   58571 cri.go:89] found id: ""
	I0802 18:52:36.284842   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.284853   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:36.284862   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:36.284924   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:36.321165   58571 cri.go:89] found id: ""
	I0802 18:52:36.321191   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.321202   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:36.321209   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:36.321270   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:36.359042   58571 cri.go:89] found id: ""
	I0802 18:52:36.359079   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.359090   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:36.359096   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:36.359176   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:36.389097   58571 cri.go:89] found id: ""
	I0802 18:52:36.389132   58571 logs.go:276] 0 containers: []
	W0802 18:52:36.389142   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:36.389154   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:36.389170   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:36.424660   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:36.424684   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:36.473113   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:36.473146   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:36.486521   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:36.486550   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:36.552013   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:36.552038   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:36.552053   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:39.126725   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:39.139025   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:39.139091   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:39.169536   58571 cri.go:89] found id: ""
	I0802 18:52:39.169567   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.169574   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:39.169580   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:39.169637   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:39.202124   58571 cri.go:89] found id: ""
	I0802 18:52:39.202147   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.202162   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:39.202170   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:39.202230   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:39.234166   58571 cri.go:89] found id: ""
	I0802 18:52:39.234192   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.234202   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:39.234209   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:39.234272   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:39.265996   58571 cri.go:89] found id: ""
	I0802 18:52:39.266019   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.266028   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:39.266034   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:39.266096   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:39.299560   58571 cri.go:89] found id: ""
	I0802 18:52:39.299582   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.299593   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:39.299601   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:39.299656   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:39.332332   58571 cri.go:89] found id: ""
	I0802 18:52:39.332359   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.332371   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:39.332379   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:39.332438   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:39.363235   58571 cri.go:89] found id: ""
	I0802 18:52:39.363264   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.363274   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:39.363282   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:39.363346   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:39.394846   58571 cri.go:89] found id: ""
	I0802 18:52:39.394870   58571 logs.go:276] 0 containers: []
	W0802 18:52:39.394880   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:39.394895   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:39.394910   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:39.459196   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:39.459222   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:39.459240   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:39.536689   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:39.536724   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:39.572160   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:39.572192   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:39.623262   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:39.623293   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:42.136268   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:42.148678   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:42.148750   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:42.178314   58571 cri.go:89] found id: ""
	I0802 18:52:42.178345   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.178356   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:42.178370   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:42.178434   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:42.209074   58571 cri.go:89] found id: ""
	I0802 18:52:42.209098   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.209106   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:42.209111   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:42.209162   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:42.240457   58571 cri.go:89] found id: ""
	I0802 18:52:42.240490   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.240502   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:42.240511   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:42.240584   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:42.277197   58571 cri.go:89] found id: ""
	I0802 18:52:42.277223   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.277231   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:42.277237   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:42.277300   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:42.316594   58571 cri.go:89] found id: ""
	I0802 18:52:42.316614   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.316622   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:42.316628   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:42.316684   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:42.347496   58571 cri.go:89] found id: ""
	I0802 18:52:42.347524   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.347534   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:42.347542   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:42.347603   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:42.379665   58571 cri.go:89] found id: ""
	I0802 18:52:42.379695   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.379704   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:42.379710   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:42.379760   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:42.411752   58571 cri.go:89] found id: ""
	I0802 18:52:42.411775   58571 logs.go:276] 0 containers: []
	W0802 18:52:42.411783   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:42.411791   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:42.411802   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:42.463791   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:42.463826   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:42.476580   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:42.476605   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:42.542042   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:42.542067   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:42.542081   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:42.616266   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:42.616303   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:45.153349   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:45.165861   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:45.165935   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:45.201899   58571 cri.go:89] found id: ""
	I0802 18:52:45.201928   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.201936   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:45.201942   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:45.201991   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:45.237492   58571 cri.go:89] found id: ""
	I0802 18:52:45.237524   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.237537   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:45.237545   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:45.237618   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:45.272874   58571 cri.go:89] found id: ""
	I0802 18:52:45.272901   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.272909   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:45.272915   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:45.272963   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:45.308599   58571 cri.go:89] found id: ""
	I0802 18:52:45.308624   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.308633   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:45.308639   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:45.308704   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:45.345059   58571 cri.go:89] found id: ""
	I0802 18:52:45.345085   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.345094   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:45.345100   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:45.345170   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:45.379469   58571 cri.go:89] found id: ""
	I0802 18:52:45.379494   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.379505   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:45.379513   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:45.379592   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:45.409316   58571 cri.go:89] found id: ""
	I0802 18:52:45.409338   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.409345   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:45.409351   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:45.409397   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:45.439532   58571 cri.go:89] found id: ""
	I0802 18:52:45.439556   58571 logs.go:276] 0 containers: []
	W0802 18:52:45.439563   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:45.439573   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:45.439586   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:45.505905   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:45.505925   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:45.505942   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:45.578359   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:45.578395   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:45.613850   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:45.613877   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:45.663554   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:45.663585   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:48.176505   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:48.189159   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:48.189219   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:48.224745   58571 cri.go:89] found id: ""
	I0802 18:52:48.224773   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.224784   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:48.224792   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:48.224853   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:48.259426   58571 cri.go:89] found id: ""
	I0802 18:52:48.259455   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.259464   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:48.259470   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:48.259525   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:48.294674   58571 cri.go:89] found id: ""
	I0802 18:52:48.294719   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.294728   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:48.294734   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:48.294778   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:48.329829   58571 cri.go:89] found id: ""
	I0802 18:52:48.329857   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.329868   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:48.329876   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:48.329937   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:48.364360   58571 cri.go:89] found id: ""
	I0802 18:52:48.364385   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.364398   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:48.364405   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:48.364516   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:48.399548   58571 cri.go:89] found id: ""
	I0802 18:52:48.399579   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.399591   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:48.399598   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:48.399663   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:48.435927   58571 cri.go:89] found id: ""
	I0802 18:52:48.435954   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.435965   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:48.435972   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:48.436043   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:48.470706   58571 cri.go:89] found id: ""
	I0802 18:52:48.470732   58571 logs.go:276] 0 containers: []
	W0802 18:52:48.470744   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:48.470755   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:48.470767   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:48.505495   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:48.505526   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:48.556277   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:48.556316   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:48.569123   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:48.569151   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:48.637367   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:48.637386   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:48.637397   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:51.213570   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:51.225362   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:51.225423   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:51.257895   58571 cri.go:89] found id: ""
	I0802 18:52:51.257922   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.257931   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:51.257942   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:51.258006   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:51.291400   58571 cri.go:89] found id: ""
	I0802 18:52:51.291427   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.291436   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:51.291443   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:51.291509   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:51.328686   58571 cri.go:89] found id: ""
	I0802 18:52:51.328709   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.328720   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:51.328727   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:51.328776   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:51.359042   58571 cri.go:89] found id: ""
	I0802 18:52:51.359073   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.359084   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:51.359092   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:51.359167   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:51.389727   58571 cri.go:89] found id: ""
	I0802 18:52:51.389751   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.389759   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:51.389765   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:51.389817   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:51.423204   58571 cri.go:89] found id: ""
	I0802 18:52:51.423233   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.423245   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:51.423253   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:51.423314   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:51.454967   58571 cri.go:89] found id: ""
	I0802 18:52:51.454998   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.455010   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:51.455018   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:51.455072   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:51.487674   58571 cri.go:89] found id: ""
	I0802 18:52:51.487698   58571 logs.go:276] 0 containers: []
	W0802 18:52:51.487706   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:51.487715   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:51.487731   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:51.536896   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:51.536932   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:51.549690   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:51.549712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:51.616231   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:51.616303   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:51.616316   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:51.692455   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:51.692487   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:54.231227   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:54.243571   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:52:54.243633   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:52:54.275170   58571 cri.go:89] found id: ""
	I0802 18:52:54.275199   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.275208   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:52:54.275214   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:52:54.275266   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:52:54.305839   58571 cri.go:89] found id: ""
	I0802 18:52:54.305866   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.305877   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:52:54.305885   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:52:54.305951   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:52:54.337689   58571 cri.go:89] found id: ""
	I0802 18:52:54.337716   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.337727   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:52:54.337735   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:52:54.337793   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:52:54.368203   58571 cri.go:89] found id: ""
	I0802 18:52:54.368234   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.368243   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:52:54.368249   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:52:54.368305   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:52:54.399905   58571 cri.go:89] found id: ""
	I0802 18:52:54.399933   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.399944   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:52:54.399952   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:52:54.400019   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:52:54.430248   58571 cri.go:89] found id: ""
	I0802 18:52:54.430275   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.430287   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:52:54.430300   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:52:54.430364   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:52:54.463778   58571 cri.go:89] found id: ""
	I0802 18:52:54.463805   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.463816   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:52:54.463824   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:52:54.463889   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:52:54.501223   58571 cri.go:89] found id: ""
	I0802 18:52:54.501244   58571 logs.go:276] 0 containers: []
	W0802 18:52:54.501252   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:52:54.501261   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:52:54.501272   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:52:54.568636   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0802 18:52:54.568656   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:52:54.568668   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:52:54.657641   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:52:54.657683   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:52:54.698254   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:52:54.698294   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:52:54.750367   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:52:54.750412   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:52:57.263866   58571 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:52:57.276178   58571 kubeadm.go:597] duration metric: took 4m1.654232112s to restartPrimaryControlPlane
	W0802 18:52:57.276249   58571 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 18:52:57.276277   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 18:52:58.389241   58571 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.112829321s)
	I0802 18:52:58.389333   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:52:58.403702   58571 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 18:52:58.413148   58571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:52:58.422468   58571 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:52:58.422483   58571 kubeadm.go:157] found existing configuration files:
	
	I0802 18:52:58.422524   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:52:58.431012   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:52:58.431066   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:52:58.439706   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:52:58.447794   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:52:58.447842   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:52:58.456381   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:52:58.465085   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:52:58.465129   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:52:58.473615   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:52:58.483072   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:52:58.483137   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:52:58.491641   58571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:52:58.555779   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:52:58.555846   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:52:58.690719   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:52:58.690865   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:52:58.691019   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:52:58.853385   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:52:58.856100   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:52:58.856206   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:52:58.856310   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:52:58.856413   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:52:58.856505   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:52:58.856617   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:52:58.856717   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:52:58.856810   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:52:58.856972   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:52:58.857092   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:52:58.857204   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:52:58.857274   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:52:58.857376   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:52:59.074674   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:52:59.175632   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:52:59.400578   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:52:59.517182   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:52:59.530764   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:52:59.532535   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:52:59.532750   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:52:59.667293   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:52:59.668966   58571 out.go:204]   - Booting up control plane ...
	I0802 18:52:59.669067   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:52:59.680464   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:52:59.681786   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:52:59.682973   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:52:59.685910   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:53:39.686429   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:53:39.687159   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:53:39.687340   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:53:44.687662   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:53:44.687895   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:53:54.688448   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:53:54.688683   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:54:14.689595   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:54:14.689794   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:54:54.692223   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:54:54.692892   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:54:54.692929   58571 kubeadm.go:310] 
	I0802 18:54:54.693024   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:54:54.693121   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:54:54.693144   58571 kubeadm.go:310] 
	I0802 18:54:54.693238   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:54:54.693339   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:54:54.693578   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:54:54.693590   58571 kubeadm.go:310] 
	I0802 18:54:54.693857   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:54:54.693953   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:54:54.694023   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:54:54.694046   58571 kubeadm.go:310] 
	I0802 18:54:54.694314   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:54:54.694531   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:54:54.694543   58571 kubeadm.go:310] 
	I0802 18:54:54.694802   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:54:54.694986   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:54:54.695094   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:54:54.695330   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:54:54.695354   58571 kubeadm.go:310] 
	I0802 18:54:54.695497   58571 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:54:54.695640   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:54:54.695935   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0802 18:54:54.696059   58571 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0802 18:54:54.696101   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 18:55:00.150287   58571 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.454164458s)
	I0802 18:55:00.150364   58571 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:55:00.164288   58571 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 18:55:00.173246   58571 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 18:55:00.173264   58571 kubeadm.go:157] found existing configuration files:
	
	I0802 18:55:00.173331   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 18:55:00.181838   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 18:55:00.181895   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 18:55:00.191723   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 18:55:00.201558   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 18:55:00.201629   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 18:55:00.211894   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 18:55:00.221693   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 18:55:00.221755   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 18:55:00.231761   58571 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 18:55:00.241368   58571 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 18:55:00.241427   58571 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 18:55:00.249898   58571 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 18:55:00.449754   58571 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:56:57.033757   58571 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-490984 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
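The kubeadm failure above boils down to the kubelet never answering on http://localhost:10248/healthz inside the old-k8s-version-490984 VM. A minimal triage sketch, using only the commands the log itself recommends plus the profile name and flags already shown in this report; the ssh step and the retry invocation are illustrative, not part of the test harness:

	# open a shell in the failing VM (profile name taken from the log above)
	out/minikube-linux-amd64 ssh -p old-k8s-version-490984

	# inside the VM: check why the kubelet is not serving /healthz on 10248
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# list any control-plane containers CRI-O actually started
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# back on the host: retry with the cgroup-driver hint from the log's suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-490984 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd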
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (212.400141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
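Note that --format={{.Host}} surfaces only the Host field, which is why the output reads Running even though the command exits non-zero; the exit code likely reflects one of the other tracked components. A small sketch for seeing the full breakdown with the same binary and profile (plain and JSON output are standard minikube status options):

	# show Host, Kubelet, APIServer and Kubeconfig states, not just {{.Host}}
	out/minikube-linux-amd64 status -p old-k8s-version-490984
	# machine-readable variant
	out/minikube-linux-amd64 status -p old-k8s-version-490984 --output=json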
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-490984 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:55:07.300822   63271 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:55:07.301073   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301083   63271 out.go:304] Setting ErrFile to fd 2...
	I0802 18:55:07.301087   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301311   63271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:55:07.301870   63271 out.go:298] Setting JSON to false
	I0802 18:55:07.302787   63271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5851,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:55:07.302842   63271 start.go:139] virtualization: kvm guest
	I0802 18:55:07.305206   63271 out.go:177] * [embed-certs-757654] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:55:07.306647   63271 notify.go:220] Checking for updates...
	I0802 18:55:07.306680   63271 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:55:07.308191   63271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:55:07.309618   63271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:55:07.310900   63271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:55:07.312292   63271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:55:07.313676   63271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:55:07.315371   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:55:07.315804   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.315868   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.330686   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0802 18:55:07.331071   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.331554   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.331573   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.331865   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.332028   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.332279   63271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:55:07.332554   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.332586   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.348583   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0802 18:55:07.349036   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.349454   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.349479   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.349841   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.350094   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.386562   63271 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:55:07.387914   63271 start.go:297] selected driver: kvm2
	I0802 18:55:07.387927   63271 start.go:901] validating driver "kvm2" against &{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.388032   63271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:55:07.388727   63271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.388793   63271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:55:07.403061   63271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:55:07.403460   63271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:55:07.403517   63271 cni.go:84] Creating CNI manager for ""
	I0802 18:55:07.403530   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:55:07.403564   63271 start.go:340] cluster config:
	{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.403666   63271 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.405667   63271 out.go:177] * Starting "embed-certs-757654" primary control-plane node in "embed-certs-757654" cluster
	I0802 18:55:07.406842   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:55:07.406881   63271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:55:07.406891   63271 cache.go:56] Caching tarball of preloaded images
	I0802 18:55:07.406977   63271 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:55:07.406989   63271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:55:07.407139   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:55:07.407354   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:55:07.407402   63271 start.go:364] duration metric: took 27.558µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:55:07.407419   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:55:07.407426   63271 fix.go:54] fixHost starting: 
	I0802 18:55:07.407713   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.407759   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.421857   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0802 18:55:07.422321   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.422811   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.422834   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.423160   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.423321   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.423495   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:55:07.424925   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Running err=<nil>
	W0802 18:55:07.424950   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:55:07.427128   63271 out.go:177] * Updating the running kvm2 "embed-certs-757654" VM ...
	I0802 18:55:07.428434   63271 machine.go:94] provisionDockerMachine start ...
	I0802 18:55:07.428462   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.428711   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:55:07.431558   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432004   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:55:07.432035   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:55:07.432412   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432774   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:55:07.432921   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 18:55:07.433139   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 18:55:07.433153   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:55:10.331372   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:13.403378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:19.483421   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:22.555412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:28.635392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:31.711303   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:40.827373   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:43.899432   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:49.979406   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:53.051366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:59.131387   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:02.203356   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:08.283365   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:11.355399   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:17.435474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:20.507366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:26.587339   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:29.659353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:35.739335   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:38.811375   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:44.891395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:47.963426   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:56:57.033757   58571 out.go:177] 
	I0802 18:56:54.043379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:57.115474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	
	
	==> CRI-O <==
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.906986496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625017906967373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d53b7c5-a549-4c6d-b5e5-85d21d163fed name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.907505467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ff5c975-c641-4233-a392-ac8febffaa5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.907570586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ff5c975-c641-4233-a392-ac8febffaa5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.907640771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1ff5c975-c641-4233-a392-ac8febffaa5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.944424156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b197af65-cd9e-4b05-8062-29041d63557e name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.944505136Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b197af65-cd9e-4b05-8062-29041d63557e name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.945660165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=201c9730-fef0-4eca-ace2-30dd59a5975d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.946014234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625017945995607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=201c9730-fef0-4eca-ace2-30dd59a5975d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.946451655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26d99687-b321-4569-9949-c17178e7e90d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.946511218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26d99687-b321-4569-9949-c17178e7e90d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.946549033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=26d99687-b321-4569-9949-c17178e7e90d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.979963545Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3666bf0d-b700-4135-b831-84115b88c335 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.980122017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3666bf0d-b700-4135-b831-84115b88c335 name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.981204579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=949313e7-b6bb-42a3-8dcb-7bbcab66d562 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.981570912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625017981552008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=949313e7-b6bb-42a3-8dcb-7bbcab66d562 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.982197571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=554d36b4-4e2e-4a57-9ef5-b5c31a3ca8a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.982267940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=554d36b4-4e2e-4a57-9ef5-b5c31a3ca8a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:57 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:57.982301553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=554d36b4-4e2e-4a57-9ef5-b5c31a3ca8a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.015396194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0cbe7b5-e379-4a54-95fa-e12c08ea1c5c name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.015483063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0cbe7b5-e379-4a54-95fa-e12c08ea1c5c name=/runtime.v1.RuntimeService/Version
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.016550597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6276bbc8-0963-4d3c-bb76-5e4221798c20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.016991915Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625018016970453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6276bbc8-0963-4d3c-bb76-5e4221798c20 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.017656105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df3d172d-e191-44d7-83ed-aa93581bfb26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.017740174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df3d172d-e191-44d7-83ed-aa93581bfb26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 18:56:58 old-k8s-version-490984 crio[651]: time="2024-08-02 18:56:58.017781652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=df3d172d-e191-44d7-83ed-aa93581bfb26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051059] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.690028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.750688] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.557853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.754585] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060053] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.196245] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.132013] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.247678] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +5.903520] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064556] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958055] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[Aug 2 18:49] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 2 18:52] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[Aug 2 18:55] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.065921] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:56:58 up 8 min,  0 users,  load average: 0.00, 0.04, 0.01
	Linux old-k8s-version-490984 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: goroutine 145 [runnable]:
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d02a0, 0xc0001020c0)
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: goroutine 124 [select]:
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000943900, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001b9200, 0x0, 0x0)
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00066f180)
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5453]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 02 18:56:56 old-k8s-version-490984 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 18:56:56 old-k8s-version-490984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 18:56:56 old-k8s-version-490984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 02 18:56:56 old-k8s-version-490984 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 18:56:56 old-k8s-version-490984 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5503]: I0802 18:56:56.813440    5503 server.go:416] Version: v1.20.0
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5503]: I0802 18:56:56.814677    5503 server.go:837] Client rotation is on, will bootstrap in background
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5503]: I0802 18:56:56.818573    5503 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5503]: W0802 18:56:56.819574    5503 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 02 18:56:56 old-k8s-version-490984 kubelet[5503]: I0802 18:56:56.819687    5503 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (210.497651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-490984" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (767.09s)
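
Note: the kubeadm [kubelet-check] messages captured above describe the probe kubeadm runs against the kubelet's health endpoint (http://localhost:10248/healthz) before giving up on the control plane. As a minimal, illustrative sketch only — not part of this test run and not taken from the minikube or kubeadm sources; the file name and 5-second timeout are assumptions — the same probe can be reproduced in Go from inside the node:

	// healthz_probe.go — hypothetical sketch of the kubelet health probe
	// referenced by the [kubelet-check] lines in the log above.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second} // assumed timeout
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure mode in the log: "connect: connection refused"
			// when the kubelet is not running on the node.
			fmt.Fprintf(os.Stderr, "kubelet healthz probe failed: %v\n", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Printf("kubelet healthz returned HTTP %d\n", resp.StatusCode)
	}

A healthy kubelet answers this with HTTP 200; in the run above the connection is refused, which is consistent with the kubelet.service crash loop shown in the ==> kubelet <== section.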

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903: exit status 3 (3.16757203s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:44:40.059474   58738 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host
	E0802 18:44:40.059494   58738 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153734163s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504903 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903: exit status 3 (3.062046213s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:44:49.275562   58817 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host
	E0802 18:44:49.275582   58817 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.183:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-504903" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
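The failure above shows `addons enable` being run against a profile whose host is unreachable over SSH (status "Error" instead of the expected "Stopped", with "dial tcp 192.168.61.183:22: connect: no route to host"). Below is a minimal Go sketch, not part of the test suite, of gating the enable on `minikube status --format={{.Host}}` reporting "Stopped" first; the binary path, profile and addon names are taken from the log above, while the polling loop and timeout are assumptions.

// Minimal sketch, not the suite's helper: poll the profile's host state via
// "minikube status --format={{.Host}}" and only enable the addon once it
// reports "Stopped". Binary path, profile and addon come from the log above;
// the retry loop and timeout are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForHostState(profile, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// A non-zero exit (e.g. "exit status 3") is tolerated; only stdout matters here.
		out, _ := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("profile %q did not reach host state %q within %s", profile, want, timeout)
}

func main() {
	profile := "default-k8s-diff-port-504903"
	if err := waitForHostState(profile, "Stopped", 2*time.Minute); err != nil {
		fmt.Println("not enabling addon:", err)
		return
	}
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", profile)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("addons enable failed: %v\n%s", err, out)
	}
}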

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
[the WARNING above repeats on every poll attempt for the remainder of the 9m0s wait; the apiserver at 192.168.39.168:8443 stays unreachable]
E0802 18:50:14.261034   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 18:52:43.927305   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
E0802 18:54:06.976147   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
E0802 18:55:14.261818   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (224.485507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
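The repeated warnings above come from the helper polling the apiserver for a pod matching the label selector until the 9m0s deadline expires. As a rough illustration only (not the test suite's actual helper), a poll of that shape can be written with client-go as below; the kubeconfig path, poll interval, and messages are assumed placeholders, while the namespace, selector, and timeout are taken from the log.

	// Illustrative sketch (assumption, not minikube's test code): poll for a Running
	// pod labelled k8s-app=kubernetes-dashboard, giving up after 9 minutes.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the real harness uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// While the apiserver is down this surfaces as "connection refused".
				fmt.Println("WARNING: pod list failed:", err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				// Matches the "context deadline exceeded" failure above.
				fmt.Println("timed out waiting for pod:", ctx.Err())
				return
			case <-time.After(5 * time.Second):
			}
		}
	}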
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (210.162863ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
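The --format flag in the status calls above is a Go text/template rendered against minikube's status struct, and the non-zero exit status encodes the machine/cluster state rather than a command failure, which is why the harness notes "(may be ok)". The snippet below is only an assumed, minimal illustration of that template rendering; the field values are made up and minikube's real status type carries more fields.

	// Illustrative sketch (assumption): rendering a {{.APIServer}}-style template
	// over a small status struct, in the spirit of `minikube status --format=...`.
	package main

	import (
		"os"
		"text/template"
	)

	type status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// Prints "Stopped", matching the stdout captured above.
	}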
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:55:07.300822   63271 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:55:07.301073   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301083   63271 out.go:304] Setting ErrFile to fd 2...
	I0802 18:55:07.301087   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301311   63271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:55:07.301870   63271 out.go:298] Setting JSON to false
	I0802 18:55:07.302787   63271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5851,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:55:07.302842   63271 start.go:139] virtualization: kvm guest
	I0802 18:55:07.305206   63271 out.go:177] * [embed-certs-757654] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:55:07.306647   63271 notify.go:220] Checking for updates...
	I0802 18:55:07.306680   63271 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:55:07.308191   63271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:55:07.309618   63271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:55:07.310900   63271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:55:07.312292   63271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:55:07.313676   63271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:55:07.315371   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:55:07.315804   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.315868   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.330686   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0802 18:55:07.331071   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.331554   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.331573   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.331865   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.332028   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.332279   63271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:55:07.332554   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.332586   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.348583   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0802 18:55:07.349036   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.349454   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.349479   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.349841   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.350094   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.386562   63271 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:55:07.387914   63271 start.go:297] selected driver: kvm2
	I0802 18:55:07.387927   63271 start.go:901] validating driver "kvm2" against &{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.388032   63271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:55:07.388727   63271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.388793   63271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:55:07.403061   63271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:55:07.403460   63271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:55:07.403517   63271 cni.go:84] Creating CNI manager for ""
	I0802 18:55:07.403530   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:55:07.403564   63271 start.go:340] cluster config:
	{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.403666   63271 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.405667   63271 out.go:177] * Starting "embed-certs-757654" primary control-plane node in "embed-certs-757654" cluster
	I0802 18:55:07.406842   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:55:07.406881   63271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:55:07.406891   63271 cache.go:56] Caching tarball of preloaded images
	I0802 18:55:07.406977   63271 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:55:07.406989   63271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:55:07.407139   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:55:07.407354   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:55:07.407402   63271 start.go:364] duration metric: took 27.558µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:55:07.407419   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:55:07.407426   63271 fix.go:54] fixHost starting: 
	I0802 18:55:07.407713   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.407759   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.421857   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0802 18:55:07.422321   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.422811   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.422834   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.423160   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.423321   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.423495   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:55:07.424925   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Running err=<nil>
	W0802 18:55:07.424950   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:55:07.427128   63271 out.go:177] * Updating the running kvm2 "embed-certs-757654" VM ...
	I0802 18:55:07.428434   63271 machine.go:94] provisionDockerMachine start ...
	I0802 18:55:07.428462   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.428711   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:55:07.431558   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432004   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:55:07.432035   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:55:07.432412   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432774   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:55:07.432921   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 18:55:07.433139   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 18:55:07.433153   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:55:10.331372   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:13.403378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:19.483421   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:22.555412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:28.635392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:31.711303   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:40.827373   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:43.899432   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:49.979406   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:53.051366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:59.131387   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:02.203356   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:08.283365   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:11.355399   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:17.435474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:20.507366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:26.587339   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:29.659353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:35.739335   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:38.811375   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:44.891395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:47.963426   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
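	// The "Error dialing TCP 192.168.72.74:22 ... no route to host" lines above are the
	// provisioner failing to reach the VM's SSH port. The Go program below is a minimal,
	// assumed reproduction of a single such dial (not minikube's code); only the address
	// is taken from the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.72.74:22", 10*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. connect: no route to host
			return
		}
		defer conn.Close()
		fmt.Println("SSH port reachable:", conn.RemoteAddr())
	}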
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:56:57.033757   58571 out.go:177] 
	I0802 18:56:54.043379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:57.115474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:03.195366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:06.267441   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:12.347367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:15.419454   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:21.499312   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:24.571479   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:30.651392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:33.723367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:39.803308   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:42.875410   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:48.959363   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:52.027390   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:58.107322   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:01.179384   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:07.259377   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:10.331445   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:16.411350   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:19.483337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:25.563336   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:28.635436   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:34.715391   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:37.787412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:43.867364   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:46.939415   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 18:58:53.990115     580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:58:53.991755     580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:58:53.993306     580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:58:53.994679     580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 18:58:53.996236     580 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:50] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 18:58:54 up 9 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

-- /stdout --
** stderr ** 
	E0802 18:58:53.632027   64242 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.661575   64242 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.693771   64242 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.723158   64242 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.752813   64242 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.782258   64242 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.813964   64242 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 18:58:53.842809   64242 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T18:58:53Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T18:58:53Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T18:58:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

** /stderr **
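The captured output above points at a single root cause for this post-mortem: crio.service never started on no-preload-407306 ("Dependency failed for Container Runtime Interface for OCI (CRI-O)"), so crictl probing of the deprecated default endpoints fails for every component and the API server on localhost:8443 refuses connections. The commands below are a minimal diagnosis sketch from inside the guest, assuming shell access via `minikube ssh -p no-preload-407306` (a hypothetical follow-up, not something this test ran); the crictl, journalctl and kubelet checks are the ones the report itself recommends or already runs, while the two systemctl queries against crio are standard systemd usage added to surface the failed dependency.

	# hypothetical follow-up from inside the guest: minikube ssh -p no-preload-407306
	# why did crio.service fail? the journal above only records the dependency failure
	sudo systemctl status crio
	sudo systemctl list-dependencies crio
	sudo journalctl -u crio -n 400
	# query CRI-O explicitly instead of letting crictl probe the deprecated default endpoints
	# (this still fails until crio.service is up, but without the endpoint-probing noise)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# kubelet health, via the checks described in the kubeadm output
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz
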
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (210.86328ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (541.31s)
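A manual follow-up on this failure can be assembled entirely from commands the report already names: the API-server status probe the helper runs, the `minikube logs --file=logs.txt` bundle the advice box asks for, and the `--extra-config=kubelet.cgroup-driver=systemd` retry suggested after the kubeadm timeout (that suggestion was printed for the v1.20.0 start above, so treat it as a general hint rather than a verified fix for this profile). A hedged sketch, using the harness binary path from this report; a regular install would call `minikube` directly:

	# status probe used by helpers_test (reports "Stopped" while the failure stands)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306
	# collect the full log bundle requested by the advice box
	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-407306
	# retry suggested in the kubeadm failure output (general hint, not verified here)
	out/minikube-linux-amd64 start -p no-preload-407306 --extra-config=kubelet.cgroup-driver=systemd
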

x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-757654 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-757654 --alsologtostderr -v=3: exit status 82 (2m0.476061528s)

-- stdout --
	* Stopping node "embed-certs-757654"  ...
	
	

-- /stdout --
** stderr ** 
	I0802 18:52:35.788614   62579 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:52:35.788732   62579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:52:35.788743   62579 out.go:304] Setting ErrFile to fd 2...
	I0802 18:52:35.788749   62579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:52:35.788921   62579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:52:35.789151   62579 out.go:298] Setting JSON to false
	I0802 18:52:35.789241   62579 mustload.go:65] Loading cluster: embed-certs-757654
	I0802 18:52:35.789586   62579 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:52:35.789667   62579 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:52:35.789869   62579 mustload.go:65] Loading cluster: embed-certs-757654
	I0802 18:52:35.789999   62579 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:52:35.790036   62579 stop.go:39] StopHost: embed-certs-757654
	I0802 18:52:35.790431   62579 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:52:35.790487   62579 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:52:35.805677   62579 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0802 18:52:35.806183   62579 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:52:35.806766   62579 main.go:141] libmachine: Using API Version  1
	I0802 18:52:35.806792   62579 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:52:35.807164   62579 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:52:35.810373   62579 out.go:177] * Stopping node "embed-certs-757654"  ...
	I0802 18:52:35.811532   62579 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0802 18:52:35.811570   62579 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:52:35.811783   62579 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0802 18:52:35.811815   62579 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:52:35.814713   62579 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:52:35.815194   62579 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:52:35.815215   62579 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:52:35.815384   62579 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:52:35.815562   62579 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:52:35.815752   62579 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:52:35.815885   62579 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 18:52:35.901200   62579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0802 18:52:35.960303   62579 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0802 18:52:36.013888   62579 main.go:141] libmachine: Stopping "embed-certs-757654"...
	I0802 18:52:36.013913   62579 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:52:36.015891   62579 main.go:141] libmachine: (embed-certs-757654) Calling .Stop
	I0802 18:52:36.020210   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 0/120
	I0802 18:52:37.021657   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 1/120
	I0802 18:52:38.022965   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 2/120
	I0802 18:52:39.024833   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 3/120
	I0802 18:52:40.026125   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 4/120
	I0802 18:52:41.027970   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 5/120
	I0802 18:52:42.029495   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 6/120
	I0802 18:52:43.030719   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 7/120
	I0802 18:52:44.032112   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 8/120
	I0802 18:52:45.033553   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 9/120
	I0802 18:52:46.035808   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 10/120
	I0802 18:52:47.037266   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 11/120
	I0802 18:52:48.038556   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 12/120
	I0802 18:52:49.039896   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 13/120
	I0802 18:52:50.041793   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 14/120
	I0802 18:52:51.043931   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 15/120
	I0802 18:52:52.045550   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 16/120
	I0802 18:52:53.046922   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 17/120
	I0802 18:52:54.048412   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 18/120
	I0802 18:52:55.050109   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 19/120
	I0802 18:52:56.052470   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 20/120
	I0802 18:52:57.054031   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 21/120
	I0802 18:52:58.055395   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 22/120
	I0802 18:52:59.057117   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 23/120
	I0802 18:53:00.058880   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 24/120
	I0802 18:53:01.060937   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 25/120
	I0802 18:53:02.062160   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 26/120
	I0802 18:53:03.063663   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 27/120
	I0802 18:53:04.064986   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 28/120
	I0802 18:53:05.066525   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 29/120
	I0802 18:53:06.068923   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 30/120
	I0802 18:53:07.070602   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 31/120
	I0802 18:53:08.072120   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 32/120
	I0802 18:53:09.073354   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 33/120
	I0802 18:53:10.074819   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 34/120
	I0802 18:53:11.077374   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 35/120
	I0802 18:53:12.079445   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 36/120
	I0802 18:53:13.081724   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 37/120
	I0802 18:53:14.083076   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 38/120
	I0802 18:53:15.084385   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 39/120
	I0802 18:53:16.086413   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 40/120
	I0802 18:53:17.087786   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 41/120
	I0802 18:53:18.089350   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 42/120
	I0802 18:53:19.091010   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 43/120
	I0802 18:53:20.092300   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 44/120
	I0802 18:53:21.094112   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 45/120
	I0802 18:53:22.095621   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 46/120
	I0802 18:53:23.097696   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 47/120
	I0802 18:53:24.099279   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 48/120
	I0802 18:53:25.100882   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 49/120
	I0802 18:53:26.102918   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 50/120
	I0802 18:53:27.104379   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 51/120
	I0802 18:53:28.105697   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 52/120
	I0802 18:53:29.107239   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 53/120
	I0802 18:53:30.108621   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 54/120
	I0802 18:53:31.110406   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 55/120
	I0802 18:53:32.111814   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 56/120
	I0802 18:53:33.114148   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 57/120
	I0802 18:53:34.115541   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 58/120
	I0802 18:53:35.117013   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 59/120
	I0802 18:53:36.119013   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 60/120
	I0802 18:53:37.120497   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 61/120
	I0802 18:53:38.122230   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 62/120
	I0802 18:53:39.123944   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 63/120
	I0802 18:53:40.125466   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 64/120
	I0802 18:53:41.127172   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 65/120
	I0802 18:53:42.128556   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 66/120
	I0802 18:53:43.129829   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 67/120
	I0802 18:53:44.131158   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 68/120
	I0802 18:53:45.132469   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 69/120
	I0802 18:53:46.134662   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 70/120
	I0802 18:53:47.136223   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 71/120
	I0802 18:53:48.137668   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 72/120
	I0802 18:53:49.139011   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 73/120
	I0802 18:53:50.140300   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 74/120
	I0802 18:53:51.142507   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 75/120
	I0802 18:53:52.143905   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 76/120
	I0802 18:53:53.145386   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 77/120
	I0802 18:53:54.146886   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 78/120
	I0802 18:53:55.148287   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 79/120
	I0802 18:53:56.150957   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 80/120
	I0802 18:53:57.152243   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 81/120
	I0802 18:53:58.153968   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 82/120
	I0802 18:53:59.155310   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 83/120
	I0802 18:54:00.156755   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 84/120
	I0802 18:54:01.158791   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 85/120
	I0802 18:54:02.160869   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 86/120
	I0802 18:54:03.162556   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 87/120
	I0802 18:54:04.164150   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 88/120
	I0802 18:54:05.165727   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 89/120
	I0802 18:54:06.167961   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 90/120
	I0802 18:54:07.169318   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 91/120
	I0802 18:54:08.170843   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 92/120
	I0802 18:54:09.172226   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 93/120
	I0802 18:54:10.173713   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 94/120
	I0802 18:54:11.175945   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 95/120
	I0802 18:54:12.177434   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 96/120
	I0802 18:54:13.179069   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 97/120
	I0802 18:54:14.180511   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 98/120
	I0802 18:54:15.181974   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 99/120
	I0802 18:54:16.183730   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 100/120
	I0802 18:54:17.185780   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 101/120
	I0802 18:54:18.187672   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 102/120
	I0802 18:54:19.188964   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 103/120
	I0802 18:54:20.190207   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 104/120
	I0802 18:54:21.192266   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 105/120
	I0802 18:54:22.194055   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 106/120
	I0802 18:54:23.195506   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 107/120
	I0802 18:54:24.197468   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 108/120
	I0802 18:54:25.199122   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 109/120
	I0802 18:54:26.201857   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 110/120
	I0802 18:54:27.203088   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 111/120
	I0802 18:54:28.204912   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 112/120
	I0802 18:54:29.206430   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 113/120
	I0802 18:54:30.207932   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 114/120
	I0802 18:54:31.210031   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 115/120
	I0802 18:54:32.211600   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 116/120
	I0802 18:54:33.212900   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 117/120
	I0802 18:54:34.214149   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 118/120
	I0802 18:54:35.215819   62579 main.go:141] libmachine: (embed-certs-757654) Waiting for machine to stop 119/120
	I0802 18:54:36.216588   62579 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0802 18:54:36.216660   62579 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0802 18:54:36.218382   62579 out.go:177] 
	W0802 18:54:36.219739   62579 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0802 18:54:36.219757   62579 out.go:239] * 
	* 
	W0802 18:54:36.222341   62579 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:54:36.223555   62579 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-757654 --alsologtostderr -v=3": exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654: exit status 3 (18.650299477s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:54:54.875384   63064 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host
	E0802 18:54:54.875406   63064 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-757654" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)
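The stop failed with GUEST_STOP_TIMEOUT after libmachine polled the domain 120 times (about two minutes) without it ever leaving "Running", and the follow-up status check could no longer reach 192.168.72.74 over SSH. A hedged sketch, specific to the kvm2 driver, of inspecting the libvirt domain directly; the domain name matches the profile name per the DBG lines above, but these commands are an assumption about a reasonable follow-up, not something this report shows being run.

    # on the Jenkins host, list libvirt domains and their states
    virsh list --all
    virsh dominfo embed-certs-757654
    # ask the guest to shut down cleanly; hard power-off only if it stays Running
    virsh shutdown embed-certs-757654
    virsh destroy embed-certs-757654
    # collect minikube's own diagnostics, as the error box above suggests
    minikube logs --file=logs.txt -p embed-certs-757654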

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-02 19:02:46.856526976 +0000 UTC m=+5787.974694588
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
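No pod ever matched k8s-app=kubernetes-dashboard within the 9m0s window. A small sketch of the kubectl checks one might run against this context to see whether the dashboard pods exist at all and what the namespace is reporting; the context name and label selector are taken from the log above, while the commands themselves (and the deployment name) are assumptions.

    # list dashboard pods by the same label selector the test waits on
    kubectl --context default-k8s-diff-port-504903 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    # recent events in the namespace, newest last
    kubectl --context default-k8s-diff-port-504903 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
    # describe the deployment (assumed addon deployment name) if pods are missing or pending
    kubectl --context default-k8s-diff-port-504903 -n kubernetes-dashboard describe deploy kubernetes-dashboard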
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504903 logs -n 25: (1.147213698s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:55:07.300822   63271 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:55:07.301073   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301083   63271 out.go:304] Setting ErrFile to fd 2...
	I0802 18:55:07.301087   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301311   63271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:55:07.301870   63271 out.go:298] Setting JSON to false
	I0802 18:55:07.302787   63271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5851,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:55:07.302842   63271 start.go:139] virtualization: kvm guest
	I0802 18:55:07.305206   63271 out.go:177] * [embed-certs-757654] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:55:07.306647   63271 notify.go:220] Checking for updates...
	I0802 18:55:07.306680   63271 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:55:07.308191   63271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:55:07.309618   63271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:55:07.310900   63271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:55:07.312292   63271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:55:07.313676   63271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:55:07.315371   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:55:07.315804   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.315868   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.330686   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0802 18:55:07.331071   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.331554   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.331573   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.331865   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.332028   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.332279   63271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:55:07.332554   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.332586   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.348583   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0802 18:55:07.349036   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.349454   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.349479   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.349841   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.350094   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.386562   63271 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:55:07.387914   63271 start.go:297] selected driver: kvm2
	I0802 18:55:07.387927   63271 start.go:901] validating driver "kvm2" against &{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.388032   63271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:55:07.388727   63271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.388793   63271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:55:07.403061   63271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:55:07.403460   63271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:55:07.403517   63271 cni.go:84] Creating CNI manager for ""
	I0802 18:55:07.403530   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:55:07.403564   63271 start.go:340] cluster config:
	{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.403666   63271 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.405667   63271 out.go:177] * Starting "embed-certs-757654" primary control-plane node in "embed-certs-757654" cluster
	I0802 18:55:07.406842   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:55:07.406881   63271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:55:07.406891   63271 cache.go:56] Caching tarball of preloaded images
	I0802 18:55:07.406977   63271 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:55:07.406989   63271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:55:07.407139   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:55:07.407354   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:55:07.407402   63271 start.go:364] duration metric: took 27.558µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:55:07.407419   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:55:07.407426   63271 fix.go:54] fixHost starting: 
	I0802 18:55:07.407713   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.407759   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.421857   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0802 18:55:07.422321   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.422811   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.422834   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.423160   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.423321   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.423495   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:55:07.424925   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Running err=<nil>
	W0802 18:55:07.424950   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:55:07.427128   63271 out.go:177] * Updating the running kvm2 "embed-certs-757654" VM ...
	I0802 18:55:07.428434   63271 machine.go:94] provisionDockerMachine start ...
	I0802 18:55:07.428462   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.428711   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:55:07.431558   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432004   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:55:07.432035   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:55:07.432412   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432774   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:55:07.432921   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 18:55:07.433139   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 18:55:07.433153   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:55:10.331372   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:13.403378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:19.483421   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:22.555412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:28.635392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:31.711303   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:40.827373   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:43.899432   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:49.979406   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:53.051366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:59.131387   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:02.203356   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:08.283365   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:11.355399   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:17.435474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:20.507366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:26.587339   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:29.659353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:35.739335   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:38.811375   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:44.891395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:47.963426   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
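	Every query against the control plane fails here because the kubelet never became healthy, so none of the static pods (including the apiserver behind localhost:8443) were ever started. A quick manual triage on the node, using only commands that already appear in this log, would be:
	
		# is the kubelet service alive, and does its health endpoint answer?
		systemctl status kubelet
		curl -sSL http://localhost:10248/healthz
		# did any control-plane container start under CRI-O at all?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo journalctl -xeu kubelet | tail -n 100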
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:56:57.033757   58571 out.go:177] 
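	Per the suggestion and the linked issue logged just above, the usual next step is to retry this profile with the kubelet cgroup driver pinned to systemd and to capture the kubelet journal if the control plane still times out. A hedged sketch (the profile name is not shown on these lines, so <profile> is a placeholder):
	
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
		# if wait-control-plane times out again, gather logs for the GitHub issue
		minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 200"
		minikube logs -p <profile> --file=logs.txt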
	I0802 18:56:54.043379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:57.115474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:03.195366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:06.267441   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:12.347367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:15.419454   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:21.499312   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:24.571479   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:30.651392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:33.723367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:39.803308   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:42.875410   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:48.959363   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:52.027390   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:58.107322   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:01.179384   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:07.259377   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:10.331445   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:16.411350   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:19.483337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:25.563336   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:28.635436   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:34.715391   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:37.787412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:43.867364   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:46.939415   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:53.019307   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:56.091325   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:02.171408   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:05.247378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:11.323383   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:14.395379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:20.475380   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:23.547337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:29.627318   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:32.699366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:38.779353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:41.851395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:44.853138   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:59:44.853196   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853510   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 18:59:44.853536   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853769   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:59:44.855229   63271 machine.go:97] duration metric: took 4m37.426779586s to provisionDockerMachine
	I0802 18:59:44.855272   63271 fix.go:56] duration metric: took 4m37.44784655s for fixHost
	I0802 18:59:44.855280   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 4m37.44786842s
	W0802 18:59:44.855294   63271 start.go:714] error starting host: provision: host is not running
	W0802 18:59:44.855364   63271 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0802 18:59:44.855373   63271 start.go:729] Will try again in 5 seconds ...
	I0802 18:59:49.856328   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:59:49.856452   63271 start.go:364] duration metric: took 63.536µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:59:49.856478   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:59:49.856486   63271 fix.go:54] fixHost starting: 
	I0802 18:59:49.856795   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:59:49.856820   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:59:49.872503   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0802 18:59:49.872935   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:59:49.873429   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:59:49.873455   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:59:49.873775   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:59:49.874015   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:59:49.874138   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:59:49.875790   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Stopped err=<nil>
	I0802 18:59:49.875812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	W0802 18:59:49.875968   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:59:49.877961   63271 out.go:177] * Restarting existing kvm2 VM for "embed-certs-757654" ...
	I0802 18:59:49.879469   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Start
	I0802 18:59:49.879683   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring networks are active...
	I0802 18:59:49.880355   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network default is active
	I0802 18:59:49.880655   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network mk-embed-certs-757654 is active
	I0802 18:59:49.881013   63271 main.go:141] libmachine: (embed-certs-757654) Getting domain xml...
	I0802 18:59:49.881644   63271 main.go:141] libmachine: (embed-certs-757654) Creating domain...
	I0802 18:59:51.107468   63271 main.go:141] libmachine: (embed-certs-757654) Waiting to get IP...
	I0802 18:59:51.108364   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.108809   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.108870   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.108788   64474 retry.go:31] will retry after 219.792683ms: waiting for machine to come up
	I0802 18:59:51.330264   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.330775   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.330798   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.330741   64474 retry.go:31] will retry after 346.067172ms: waiting for machine to come up
	I0802 18:59:51.677951   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.678462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.678504   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.678436   64474 retry.go:31] will retry after 313.108863ms: waiting for machine to come up
	I0802 18:59:51.992934   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.993410   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.993439   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.993354   64474 retry.go:31] will retry after 427.090188ms: waiting for machine to come up
	I0802 18:59:52.421609   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:52.422050   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:52.422080   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:52.422014   64474 retry.go:31] will retry after 577.531979ms: waiting for machine to come up
	I0802 18:59:53.000756   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.001336   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.001366   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.001280   64474 retry.go:31] will retry after 808.196796ms: waiting for machine to come up
	I0802 18:59:53.811289   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.811650   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.811674   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.811600   64474 retry.go:31] will retry after 906.307667ms: waiting for machine to come up
	I0802 18:59:54.720008   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:54.720637   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:54.720667   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:54.720586   64474 retry.go:31] will retry after 951.768859ms: waiting for machine to come up
	I0802 18:59:55.674137   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:55.674555   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:55.674599   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:55.674505   64474 retry.go:31] will retry after 1.653444272s: waiting for machine to come up
	I0802 18:59:57.329527   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:57.329936   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:57.329962   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:57.329899   64474 retry.go:31] will retry after 1.517025614s: waiting for machine to come up
	I0802 18:59:58.848461   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:58.848947   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:58.848991   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:58.848907   64474 retry.go:31] will retry after 1.930384725s: waiting for machine to come up
	I0802 19:00:00.781462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:00.781935   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:00.781965   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:00.781892   64474 retry.go:31] will retry after 3.609517872s: waiting for machine to come up
	I0802 19:00:04.395801   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:04.396325   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:04.396353   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:04.396283   64474 retry.go:31] will retry after 4.053197681s: waiting for machine to come up
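	The retry loop above is only waiting for the restarted domain to pick up a DHCP lease on the mk-embed-certs-757654 libvirt network. Assuming virsh is available on the Jenkins host (not shown in this log), the same state can be read directly:
	
		# is the domain running, and has it been offered a lease yet?
		virsh list --all | grep embed-certs-757654
		virsh net-dhcp-leases mk-embed-certs-757654
		# the MAC address to look for, per the DBG lines above: 52:54:00:d5:0f:4c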
	I0802 19:00:08.453545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454111   63271 main.go:141] libmachine: (embed-certs-757654) Found IP for machine: 192.168.72.74
	I0802 19:00:08.454144   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has current primary IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454154   63271 main.go:141] libmachine: (embed-certs-757654) Reserving static IP address...
	I0802 19:00:08.454669   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.454695   63271 main.go:141] libmachine: (embed-certs-757654) DBG | skip adding static IP to network mk-embed-certs-757654 - found existing host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"}
	I0802 19:00:08.454709   63271 main.go:141] libmachine: (embed-certs-757654) Reserved static IP address: 192.168.72.74
	I0802 19:00:08.454723   63271 main.go:141] libmachine: (embed-certs-757654) Waiting for SSH to be available...
	I0802 19:00:08.454741   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Getting to WaitForSSH function...
	I0802 19:00:08.457106   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457426   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.457477   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457594   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH client type: external
	I0802 19:00:08.457622   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa (-rw-------)
	I0802 19:00:08.457655   63271 main.go:141] libmachine: (embed-certs-757654) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:00:08.457671   63271 main.go:141] libmachine: (embed-certs-757654) DBG | About to run SSH command:
	I0802 19:00:08.457689   63271 main.go:141] libmachine: (embed-certs-757654) DBG | exit 0
	I0802 19:00:08.583153   63271 main.go:141] libmachine: (embed-certs-757654) DBG | SSH cmd err, output: <nil>: 
	I0802 19:00:08.583546   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetConfigRaw
	I0802 19:00:08.584156   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.586987   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587373   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.587403   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587628   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 19:00:08.587836   63271 machine.go:94] provisionDockerMachine start ...
	I0802 19:00:08.587858   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:08.588062   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.590424   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.590790   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590889   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.591079   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591258   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591427   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.591610   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.591800   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.591815   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 19:00:08.699598   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 19:00:08.699631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.699874   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 19:00:08.699905   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.700064   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.702828   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703221   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.703250   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703426   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.703600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703751   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703891   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.704036   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.704249   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.704267   63271 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-757654 && echo "embed-certs-757654" | sudo tee /etc/hostname
	I0802 19:00:08.825824   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-757654
	
	I0802 19:00:08.825854   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.828688   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829029   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.829059   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829236   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.829456   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829603   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829752   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.829933   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.830107   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.830124   63271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-757654' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-757654/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-757654' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:00:08.949050   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
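	The SSH snippet above only rewrites the 127.0.1.1 entry when /etc/hosts does not already name the machine, and the empty command output here indicates it succeeded. A trivial spot check over the same SSH session would be:
	
		hostname
		grep embed-certs-757654 /etc/hosts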
	I0802 19:00:08.949088   63271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:00:08.949109   63271 buildroot.go:174] setting up certificates
	I0802 19:00:08.949117   63271 provision.go:84] configureAuth start
	I0802 19:00:08.949135   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.949433   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.952237   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.952573   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952723   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.954970   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955440   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.955468   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955644   63271 provision.go:143] copyHostCerts
	I0802 19:00:08.955696   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:00:08.955706   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:00:08.955801   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:00:08.955926   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:00:08.955939   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:00:08.955970   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:00:08.956043   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:00:08.956051   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:00:08.956074   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:00:08.956136   63271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.embed-certs-757654 san=[127.0.0.1 192.168.72.74 embed-certs-757654 localhost minikube]
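	[annotation] The SAN list on that line (127.0.0.1, 192.168.72.74, embed-certs-757654, localhost, minikube) is what ends up in server.pem. As a sketch of how such a SAN set maps onto an x509 template, here is a simplified self-signed variant; the real flow signs with ca.pem/ca-key.pem, and this is not minikube's code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs taken from the provision.go log line above.
		dnsNames := []string{"embed-certs-757654", "localhost", "minikube"}
		ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.74")}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-757654"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		// Self-signed here for brevity; minikube signs with its own CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}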
	I0802 19:00:09.274751   63271 provision.go:177] copyRemoteCerts
	I0802 19:00:09.274811   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:00:09.274833   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.277417   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277757   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.277782   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277937   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.278139   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.278307   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.278429   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.360988   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:00:09.383169   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 19:00:09.406422   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 19:00:09.430412   63271 provision.go:87] duration metric: took 481.276691ms to configureAuth
	I0802 19:00:09.430474   63271 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:00:09.430718   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:00:09.430812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.433678   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434068   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.434097   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434234   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.434458   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434768   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.434952   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.435197   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.435220   63271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:00:09.694497   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:00:09.694540   63271 machine.go:97] duration metric: took 1.10669177s to provisionDockerMachine
	I0802 19:00:09.694555   63271 start.go:293] postStartSetup for "embed-certs-757654" (driver="kvm2")
	I0802 19:00:09.694566   63271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:00:09.694586   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.694913   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:00:09.694938   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.697387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697722   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.697765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697828   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.698011   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.698159   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.698280   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.781383   63271 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:00:09.785521   63271 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:00:09.785555   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:00:09.785639   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:00:09.785760   63271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:00:09.785891   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:00:09.796028   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:09.820115   63271 start.go:296] duration metric: took 125.544407ms for postStartSetup
	I0802 19:00:09.820156   63271 fix.go:56] duration metric: took 19.963670883s for fixHost
	I0802 19:00:09.820175   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.823086   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.823427   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.823881   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824077   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824217   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.824403   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.824616   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.824627   63271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:00:09.931624   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625209.908806442
	
	I0802 19:00:09.931652   63271 fix.go:216] guest clock: 1722625209.908806442
	I0802 19:00:09.931660   63271 fix.go:229] Guest: 2024-08-02 19:00:09.908806442 +0000 UTC Remote: 2024-08-02 19:00:09.82015998 +0000 UTC m=+302.554066499 (delta=88.646462ms)
	I0802 19:00:09.931680   63271 fix.go:200] guest clock delta is within tolerance: 88.646462ms
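	[annotation] The delta reported here is just |guest clock - host clock|, computed from the `date +%s.%N` round trip above. A small Go sketch of that calculation using the values from the log (the 2-second tolerance below is an assumed threshold, not minikube's constant):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns its
	// absolute drift from the supplied local timestamp.
	func clockDelta(guestDate string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		d := local.Sub(guest)
		if d < 0 {
			d = -d
		}
		return d, nil
	}

	func main() {
		// Guest and remote timestamps echoed in the log above.
		d, err := clockDelta("1722625209.908806442", time.Unix(1722625209, 820159980))
		if err != nil {
			panic(err)
		}
		// Treat a drift of under a couple of seconds as "within tolerance".
		fmt.Printf("delta=%v within tolerance: %v\n", d, d < 2*time.Second)
	}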
	I0802 19:00:09.931686   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 20.075223098s
	I0802 19:00:09.931706   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.931993   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:09.934694   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935023   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.935067   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935214   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935703   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935866   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935961   63271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:00:09.936013   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.936079   63271 ssh_runner.go:195] Run: cat /version.json
	I0802 19:00:09.936100   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.938619   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.938973   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.938996   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939017   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939183   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939346   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939541   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.939546   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.939566   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939733   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939753   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.939839   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939986   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.940143   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:10.060439   63271 ssh_runner.go:195] Run: systemctl --version
	I0802 19:00:10.066688   63271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:00:10.209783   63271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:00:10.215441   63271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:00:10.215530   63271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:00:10.230786   63271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:00:10.230808   63271 start.go:495] detecting cgroup driver to use...
	I0802 19:00:10.230894   63271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:00:10.246480   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:00:10.260637   63271 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:00:10.260694   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:00:10.273890   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:00:10.286949   63271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:00:10.396045   63271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:00:10.558766   63271 docker.go:233] disabling docker service ...
	I0802 19:00:10.558830   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:00:10.572592   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:00:10.585221   63271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:00:10.711072   63271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:00:10.831806   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:00:10.853846   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:00:10.871644   63271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:00:10.871703   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.881356   63271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:00:10.881415   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.891537   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.901976   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.911415   63271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:00:10.921604   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.931914   63271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.948828   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
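	[annotation] Each of those sed passes is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf. For illustration, the pause-image and cgroup-manager edits expressed in Go against a stand-in config string (a sketch only, not how minikube applies them):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Trimmed-down stand-in for /etc/crio/crio.conf.d/02-crio.conf.
		conf := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
		// Same effect as the two sed substitutions in the log above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}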
	I0802 19:00:10.958456   63271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:00:10.967234   63271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:00:10.967291   63271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:00:10.980348   63271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:00:10.989378   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:11.105254   63271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:00:11.241019   63271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:00:11.241094   63271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:00:11.245512   63271 start.go:563] Will wait 60s for crictl version
	I0802 19:00:11.245560   63271 ssh_runner.go:195] Run: which crictl
	I0802 19:00:11.249126   63271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:00:11.287138   63271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:00:11.287233   63271 ssh_runner.go:195] Run: crio --version
	I0802 19:00:11.316821   63271 ssh_runner.go:195] Run: crio --version
	I0802 19:00:11.344756   63271 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:00:11.346052   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:11.348613   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349012   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:11.349040   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349288   63271 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0802 19:00:11.353165   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:11.364518   63271 kubeadm.go:883] updating cluster {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:00:11.364682   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:00:11.364743   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:11.399565   63271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:00:11.399667   63271 ssh_runner.go:195] Run: which lz4
	I0802 19:00:11.403250   63271 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:00:11.406951   63271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:00:11.406982   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:00:12.658177   63271 crio.go:462] duration metric: took 1.254950494s to copy over tarball
	I0802 19:00:12.658258   63271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:00:14.794602   63271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.136306374s)
	I0802 19:00:14.794636   63271 crio.go:469] duration metric: took 2.136431079s to extract the tarball
	I0802 19:00:14.794644   63271 ssh_runner.go:146] rm: /preloaded.tar.lz4
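	[annotation] The preload path here is: copy the ~406 MB .tar.lz4 over SSH, extract it under /var with xattrs preserved, then delete the tarball. The extraction itself is just the tar invocation shown above; a Go sketch driving the same command via os/exec (assumes sudo and lz4 are available on the target):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same flags as the ssh_runner command in the log above.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
	}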
	I0802 19:00:14.831660   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:14.871909   63271 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:00:14.871931   63271 cache_images.go:84] Images are preloaded, skipping loading
	I0802 19:00:14.871939   63271 kubeadm.go:934] updating node { 192.168.72.74 8443 v1.30.3 crio true true} ...
	I0802 19:00:14.872057   63271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-757654 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 19:00:14.872134   63271 ssh_runner.go:195] Run: crio config
	I0802 19:00:14.921874   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:14.921937   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:14.921952   63271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:00:14.921978   63271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.74 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-757654 NodeName:embed-certs-757654 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:00:14.922146   63271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-757654"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
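	[annotation] One property this generated config relies on is that podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) do not overlap. A quick stdlib check of that invariant (illustrative only, not part of minikube):

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		pod := netip.MustParsePrefix("10.244.0.0/16")
		svc := netip.MustParsePrefix("10.96.0.0/12")
		fmt.Println("overlap:", pod.Overlaps(svc)) // false for the values above
	}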
	
	I0802 19:00:14.922224   63271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:00:14.931751   63271 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:00:14.931818   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:00:14.942115   63271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0802 19:00:14.959155   63271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:00:14.977137   63271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0802 19:00:14.994426   63271 ssh_runner.go:195] Run: grep 192.168.72.74	control-plane.minikube.internal$ /etc/hosts
	I0802 19:00:14.997882   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:15.009925   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:15.117317   63271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:00:15.133773   63271 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654 for IP: 192.168.72.74
	I0802 19:00:15.133798   63271 certs.go:194] generating shared ca certs ...
	I0802 19:00:15.133815   63271 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:00:15.133986   63271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:00:15.134036   63271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:00:15.134044   63271 certs.go:256] generating profile certs ...
	I0802 19:00:15.134174   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/client.key
	I0802 19:00:15.134268   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key.edfbb872
	I0802 19:00:15.134321   63271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key
	I0802 19:00:15.134471   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:00:15.134513   63271 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:00:15.134523   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:00:15.134559   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:00:15.134592   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:00:15.134629   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:00:15.134680   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:15.135580   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:00:15.166676   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:00:15.198512   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:00:15.222007   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:00:15.256467   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0802 19:00:15.282024   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:00:15.313750   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:00:15.336950   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 19:00:15.361688   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:00:15.385790   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:00:15.407897   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:00:15.432712   63271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:00:15.450086   63271 ssh_runner.go:195] Run: openssl version
	I0802 19:00:15.455897   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:00:15.466553   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470703   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470764   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.476433   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:00:15.486297   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:00:15.497188   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501643   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501712   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.507198   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 19:00:15.517747   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:00:15.528337   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532658   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532704   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.537982   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:00:15.547569   63271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:00:15.551539   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 19:00:15.556863   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 19:00:15.562004   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 19:00:15.567611   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 19:00:15.572837   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 19:00:15.577902   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
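	[annotation] Each `-checkend 86400` call above asks whether the certificate expires within the next 24 hours. The same check in Go, shown as a sketch against one of the paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within d, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}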
	I0802 19:00:15.583126   63271 kubeadm.go:392] StartCluster: {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:00:15.583255   63271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:00:15.583325   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.618245   63271 cri.go:89] found id: ""
	I0802 19:00:15.618324   63271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:00:15.627752   63271 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 19:00:15.627774   63271 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 19:00:15.627830   63271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 19:00:15.636794   63271 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 19:00:15.637893   63271 kubeconfig.go:125] found "embed-certs-757654" server: "https://192.168.72.74:8443"
	I0802 19:00:15.640011   63271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 19:00:15.649091   63271 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.74
	I0802 19:00:15.649122   63271 kubeadm.go:1160] stopping kube-system containers ...
	I0802 19:00:15.649135   63271 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 19:00:15.649199   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.688167   63271 cri.go:89] found id: ""
	I0802 19:00:15.688231   63271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 19:00:15.707188   63271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:00:15.717501   63271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:00:15.717523   63271 kubeadm.go:157] found existing configuration files:
	
	I0802 19:00:15.717564   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:00:15.726600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:00:15.726648   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:00:15.736483   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:00:15.745075   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:00:15.745137   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:00:15.754027   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.762600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:00:15.762650   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.771220   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:00:15.779384   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:00:15.779450   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:00:15.788081   63271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:00:15.796772   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:15.902347   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.011025   63271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108635171s)
	I0802 19:00:17.011068   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.229454   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.302558   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.405239   63271 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:00:17.405325   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:17.905496   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.405716   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.906507   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.405762   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.905447   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.920906   63271 api_server.go:72] duration metric: took 2.515676455s to wait for apiserver process to appear ...
	I0802 19:00:19.920938   63271 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:00:19.920965   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.287856   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.287881   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.287893   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.328293   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.328340   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.421484   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.426448   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.426493   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:22.921227   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.925796   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.925830   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.421392   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.426450   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:23.426474   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.921015   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.925369   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0802 19:00:23.931827   63271 api_server.go:141] control plane version: v1.30.3
	I0802 19:00:23.931850   63271 api_server.go:131] duration metric: took 4.010904656s to wait for apiserver health ...
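
The block above shows minikube polling the apiserver's /healthz endpoint roughly every half second, treating a 500 response with per-check [+]/[-] lines as "not ready yet" and stopping once it gets a 200 "ok". A minimal sketch of that kind of wait loop (plain net/http, not minikube's actual api_server.go; the URL and 4-minute budget are taken from the log, the TLS handling here is an assumption) could look like this in Go:

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it answers 200, or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Sketch only: skip cert verification; minikube really talks to the
			// apiserver using the cluster's own CA and client certificates.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// A 500 with [-]poststarthook/... lines (as in the log above) just
				// means some post-start hooks have not finished yet; keep waiting.
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("%s did not become healthy within %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.72.74:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
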
	I0802 19:00:23.931860   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:23.931869   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:23.933936   63271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:00:23.935422   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:00:23.946751   63271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
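
Here minikube picks the bridge CNI for the kvm2 driver + crio runtime and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not in the log; the sketch below only illustrates the general shape of a bridge + host-local conflist, written locally instead of over SSH (every field value is an assumption, not the real 1-k8s.conflist):

	package main
	
	import "os"
	
	// Illustrative bridge CNI config; NOT the actual 1-k8s.conflist minikube generates.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`
	
	func main() {
		// minikube does this over SSH ("scp memory --> ..."); locally it is just a file write.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}
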
	I0802 19:00:23.965059   63271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:00:23.976719   63271 system_pods.go:59] 8 kube-system pods found
	I0802 19:00:23.976770   63271 system_pods.go:61] "coredns-7db6d8ff4d-dldmc" [fd66a301-73a8-4c3a-9a3c-813d9940c233] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:00:23.976782   63271 system_pods.go:61] "etcd-embed-certs-757654" [5644c343-74c1-4b35-8700-0f75991c1227] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 19:00:23.976793   63271 system_pods.go:61] "kube-apiserver-embed-certs-757654" [726eda65-25be-4f4d-9322-e8c285df16b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 19:00:23.976801   63271 system_pods.go:61] "kube-controller-manager-embed-certs-757654" [aa23470d-fb61-4a05-ad70-afa56cb3439c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 19:00:23.976808   63271 system_pods.go:61] "kube-proxy-k8lnc" [8cedcb95-3796-4c88-9980-74f75e1240f6] Running
	I0802 19:00:23.976816   63271 system_pods.go:61] "kube-scheduler-embed-certs-757654" [1f3f3c29-c680-44d8-8d6f-76a6d5f99eca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 19:00:23.976824   63271 system_pods.go:61] "metrics-server-569cc877fc-8nfts" [fed56acf-7b52-4414-a3cd-003d769368a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 19:00:23.976830   63271 system_pods.go:61] "storage-provisioner" [b9e24584-d431-431e-a0ce-4e10c8ed28e7] Running
	I0802 19:00:23.976842   63271 system_pods.go:74] duration metric: took 11.758424ms to wait for pod list to return data ...
	I0802 19:00:23.976851   63271 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:00:23.980046   63271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:00:23.980077   63271 node_conditions.go:123] node cpu capacity is 2
	I0802 19:00:23.980091   63271 node_conditions.go:105] duration metric: took 3.224494ms to run NodePressure ...
	I0802 19:00:23.980110   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:24.244478   63271 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248352   63271 kubeadm.go:739] kubelet initialised
	I0802 19:00:24.248371   63271 kubeadm.go:740] duration metric: took 3.863328ms waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248380   63271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:00:24.260573   63271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:26.266305   63271 pod_ready.go:102] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:28.267770   63271 pod_ready.go:92] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:28.267794   63271 pod_ready.go:81] duration metric: took 4.007193958s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:28.267804   63271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.281164   63271 pod_ready.go:102] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:30.775554   63271 pod_ready.go:92] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:30.775577   63271 pod_ready.go:81] duration metric: took 2.507766234s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.775587   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280678   63271 pod_ready.go:92] pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:31.280706   63271 pod_ready.go:81] duration metric: took 505.111529ms for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280718   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:33.285821   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:35.786849   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:37.787600   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:38.286212   63271 pod_ready.go:92] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.286238   63271 pod_ready.go:81] duration metric: took 7.005511802s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.286251   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290785   63271 pod_ready.go:92] pod "kube-proxy-k8lnc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.290808   63271 pod_ready.go:81] duration metric: took 4.549071ms for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290819   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294795   63271 pod_ready.go:92] pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.294818   63271 pod_ready.go:81] duration metric: took 3.989197ms for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294827   63271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:40.301046   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:42.800745   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:45.300974   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:47.301922   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:49.800527   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:51.801849   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:54.301458   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:56.801027   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:59.300566   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:01.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:03.801351   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:05.801445   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:08.300706   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:10.801090   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:13.302416   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:15.801900   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:18.301115   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:20.801699   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:23.301191   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:25.801392   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:28.300859   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:30.303055   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:32.801185   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:35.300663   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:37.800850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:39.801554   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:42.299824   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:44.300915   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:46.301116   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:48.801022   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:50.801265   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:53.301815   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:55.804154   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:58.306260   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:00.800350   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:02.801306   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:04.801767   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:06.801850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:09.300911   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:11.801540   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:13.801899   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:16.301139   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:18.801264   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:20.801310   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:22.801602   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:25.300418   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:27.800576   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:29.801107   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:32.300367   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:34.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:36.800348   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:38.800863   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:41.301210   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
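
The long run of pod_ready.go:102 lines above is minikube polling the metrics-server pod's Ready condition every couple of seconds until its 4m0s budget runs out. A rough client-go equivalent of that wait (a sketch, not minikube's pod_ready.go; the kubeconfig path and poll interval are assumptions, the pod name is the one from the log) is:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		ns, name := "kube-system", "metrics-server-569cc877fc-8nfts" // pod from the log above
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // roughly the cadence pod_ready.go shows above
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
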
	
	
	==> CRI-O <==
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.449063985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625367449039139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3f0ff3a-0ff5-4484-85be-e6c4211774ce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.449707513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61b28e0c-8a70-40af-9da4-c0662709ea93 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.449759038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61b28e0c-8a70-40af-9da4-c0662709ea93 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.450108428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61b28e0c-8a70-40af-9da4-c0662709ea93 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.484584415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcf2b784-bc85-4c67-9fd2-00184b9629b8 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.484696181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcf2b784-bc85-4c67-9fd2-00184b9629b8 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.486481713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4f76090-e816-46b3-aeda-94086903dd22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.487682173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625367487604688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4f76090-e816-46b3-aeda-94086903dd22 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.488812038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=788b656f-e31a-4471-8823-45e084349cb0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.488889255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=788b656f-e31a-4471-8823-45e084349cb0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.489163927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=788b656f-e31a-4471-8823-45e084349cb0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.529701368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=641cff37-2e2d-4d4c-85f7-117fde8e0939 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.529830065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=641cff37-2e2d-4d4c-85f7-117fde8e0939 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.531509296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb0bcbc5-14e4-4468-ae11-7c7f6ff57bde name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.532165274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625367532129671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb0bcbc5-14e4-4468-ae11-7c7f6ff57bde name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.532861144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4082a8f6-2e1e-4835-94eb-9b4538255070 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.532960988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4082a8f6-2e1e-4835-94eb-9b4538255070 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.533323455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4082a8f6-2e1e-4835-94eb-9b4538255070 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.566353006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a88ebf83-9d42-407b-9955-f7ed76ca3c35 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.566444341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a88ebf83-9d42-407b-9955-f7ed76ca3c35 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.567893341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c6b0a6b-04b1-4513-aaa7-33c7f420e133 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.568678055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625367568636598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c6b0a6b-04b1-4513-aaa7-33c7f420e133 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.569627220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adf4fd6b-b486-4973-b596-66a4cc1a2481 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.569729661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adf4fd6b-b486-4973-b596-66a4cc1a2481 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:02:47 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:02:47.570093851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adf4fd6b-b486-4973-b596-66a4cc1a2481 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	98515615127ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   c2c06fa11f752       storage-provisioner
	e253ad56fe421       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d8938fd13053c       busybox
	d8510f8b41082       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   704466c74825e       coredns-7db6d8ff4d-k46j2
	ead1da5f29baa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   c2c06fa11f752       storage-provisioner
	1d9090ed318c1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago      Running             kube-proxy                1                   ab12399ad0296       kube-proxy-dfq8b
	071fbeaa4252c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      13 minutes ago      Running             kube-controller-manager   1                   bda46890c78a9       kube-controller-manager-default-k8s-diff-port-504903
	8a3b871f33afd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      13 minutes ago      Running             kube-apiserver            1                   4886e9279f8a3       kube-apiserver-default-k8s-diff-port-504903
	54fef22170bcc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago      Running             kube-scheduler            1                   d50c8574a1768       kube-scheduler-default-k8s-diff-port-504903
	3c33570560804       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   0d282d253e03f       etcd-default-k8s-diff-port-504903
	
	
	==> coredns [d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49594 - 49242 "HINFO IN 3514459592400852423.5345974895971787697. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01186339s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-504903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=default-k8s-diff-port-504903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_41_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:41:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504903
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 19:02:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 19:00:01 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 19:00:01 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 19:00:01 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 19:00:01 +0000   Fri, 02 Aug 2024 18:49:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    default-k8s-diff-port-504903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 91cb828529e14304a21266cb2b67ace8
	  System UUID:                91cb8285-29e1-4304-a212-66cb2b67ace8
	  Boot ID:                    97c0214f-7b2f-4890-b6ff-13cd401e038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-k46j2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-504903                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-dfq8b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-569cc877fc-pw5tt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-504903 event: Registered Node default-k8s-diff-port-504903 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-504903 event: Registered Node default-k8s-diff-port-504903 in Controller
	
	
	==> dmesg <==
	[Aug 2 18:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051873] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037524] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.871149] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.807454] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.493783] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:49] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.063128] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058743] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.196163] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.120913] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.269957] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.126810] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +1.689698] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.060987] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.527787] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.372051] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.341950] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.078763] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2] <==
	{"level":"info","ts":"2024-08-02T18:49:18.457862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-02T18:49:18.457892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgPreVoteResp from 378cdee1d1b27193 at term 2"}
	{"level":"info","ts":"2024-08-02T18:49:18.457906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became candidate at term 3"}
	{"level":"info","ts":"2024-08-02T18:49:18.457912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgVoteResp from 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-08-02T18:49:18.457927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became leader at term 3"}
	{"level":"info","ts":"2024-08-02T18:49:18.457935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 378cdee1d1b27193 elected leader 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2024-08-02T18:49:18.470179Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"378cdee1d1b27193","local-member-attributes":"{Name:default-k8s-diff-port-504903 ClientURLs:[https://192.168.61.183:2379]}","request-path":"/0/members/378cdee1d1b27193/attributes","cluster-id":"438aa8919cf6d084","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-02T18:49:18.470386Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:49:18.470398Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-02T18:49:18.470419Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-02T18:49:18.470861Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-02T18:49:18.472456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-02T18:49:18.472627Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.183:2379"}
	{"level":"info","ts":"2024-08-02T18:49:40.144292Z","caller":"traceutil/trace.go:171","msg":"trace[1112384866] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"157.742312ms","start":"2024-08-02T18:49:39.986521Z","end":"2024-08-02T18:49:40.144264Z","steps":["trace[1112384866] 'process raft request'  (duration: 157.484476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:50:32.977507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.814583ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-pw5tt\" ","response":"range_response_count:1 size:4293"}
	{"level":"info","ts":"2024-08-02T18:50:32.977654Z","caller":"traceutil/trace.go:171","msg":"trace[1999436157] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-pw5tt; range_end:; response_count:1; response_revision:642; }","duration":"206.042121ms","start":"2024-08-02T18:50:32.771576Z","end":"2024-08-02T18:50:32.977618Z","steps":["trace[1999436157] 'range keys from in-memory index tree'  (duration: 205.673533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:51:21.534462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.81836ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8184044464772246252 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:687 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T18:51:21.53479Z","caller":"traceutil/trace.go:171","msg":"trace[1917900268] linearizableReadLoop","detail":"{readStateIndex:756; appliedIndex:755; }","duration":"265.954844ms","start":"2024-08-02T18:51:21.268818Z","end":"2024-08-02T18:51:21.534772Z","steps":["trace[1917900268] 'read index received'  (duration: 6.956384ms)","trace[1917900268] 'applied index is now lower than readState.Index'  (duration: 258.99734ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T18:51:21.534849Z","caller":"traceutil/trace.go:171","msg":"trace[496878017] transaction","detail":"{read_only:false; response_revision:689; number_of_response:1; }","duration":"457.853671ms","start":"2024-08-02T18:51:21.076981Z","end":"2024-08-02T18:51:21.534835Z","steps":["trace[496878017] 'process raft request'  (duration: 198.840598ms)","trace[496878017] 'compare'  (duration: 257.533499ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T18:51:21.534986Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.160503ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-569cc877fc-pw5tt\" ","response":"range_response_count:1 size:4249"}
	{"level":"info","ts":"2024-08-02T18:51:21.535061Z","caller":"traceutil/trace.go:171","msg":"trace[2083572994] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-569cc877fc-pw5tt; range_end:; response_count:1; response_revision:689; }","duration":"266.261536ms","start":"2024-08-02T18:51:21.268792Z","end":"2024-08-02T18:51:21.535054Z","steps":["trace[2083572994] 'agreement among raft nodes before linearized reading'  (duration: 266.086112ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T18:51:21.534973Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T18:51:21.076971Z","time spent":"457.955932ms","remote":"127.0.0.1:51598","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:687 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-02T18:59:18.499043Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":834}
	{"level":"info","ts":"2024-08-02T18:59:18.508878Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":834,"took":"9.488124ms","hash":2589935635,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2625536,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-02T18:59:18.508923Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2589935635,"revision":834,"compact-revision":-1}
	
	
	==> kernel <==
	 19:02:47 up 13 min,  0 users,  load average: 0.10, 0.11, 0.09
	Linux default-k8s-diff-port-504903 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e] <==
	I0802 18:57:20.708649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 18:59:19.711418       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 18:59:19.711707       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0802 18:59:20.711916       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 18:59:20.712002       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 18:59:20.712010       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 18:59:20.711933       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 18:59:20.712084       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 18:59:20.713139       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:00:20.712767       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:00:20.712855       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:00:20.712874       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:00:20.713868       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:00:20.713930       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:00:20.713960       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:02:20.713360       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:02:20.713552       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:02:20.713561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:02:20.714470       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:02:20.714564       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:02:20.714594       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537] <==
	I0802 18:57:03.086863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 18:57:32.605814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 18:57:33.096088       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 18:58:02.611244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 18:58:03.103111       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 18:58:32.616361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 18:58:33.123161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 18:59:02.620891       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 18:59:03.135845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 18:59:32.625101       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 18:59:33.142938       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:00:02.629789       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:00:03.149992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:00:20.922661       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="341.724µs"
	I0802 19:00:31.919988       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="155.052µs"
	E0802 19:00:32.636728       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:00:33.159279       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:01:02.641479       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:01:03.173266       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:01:32.646540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:01:33.180515       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:02:02.651624       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:02:03.187939       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:02:32.657433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:02:33.200854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7] <==
	I0802 18:49:20.456542       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:49:20.476986       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.183"]
	I0802 18:49:20.506935       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:49:20.506991       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:49:20.507045       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:49:20.509314       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:49:20.509576       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:49:20.509599       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:49:20.510982       1 config.go:192] "Starting service config controller"
	I0802 18:49:20.511018       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:49:20.511078       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:49:20.511103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:49:20.511698       1 config.go:319] "Starting node config controller"
	I0802 18:49:20.511719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:49:20.612150       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:49:20.612236       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:49:20.612256       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194] <==
	I0802 18:49:17.897047       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:49:19.669727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:49:19.669804       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:49:19.669832       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:49:19.669855       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:49:19.712907       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:49:19.715248       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:49:19.716882       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:49:19.717318       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:49:19.720017       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:49:19.717341       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:49:19.820487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 19:00:14 default-k8s-diff-port-504903 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:00:14 default-k8s-diff-port-504903 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:00:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:00:20 default-k8s-diff-port-504903 kubelet[928]: E0802 19:00:20.906535     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:00:31 default-k8s-diff-port-504903 kubelet[928]: E0802 19:00:31.905429     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:00:44 default-k8s-diff-port-504903 kubelet[928]: E0802 19:00:44.905636     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:00:58 default-k8s-diff-port-504903 kubelet[928]: E0802 19:00:58.907283     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:01:12 default-k8s-diff-port-504903 kubelet[928]: E0802 19:01:12.905322     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:01:14 default-k8s-diff-port-504903 kubelet[928]: E0802 19:01:14.919995     928 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:01:14 default-k8s-diff-port-504903 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:01:14 default-k8s-diff-port-504903 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:01:14 default-k8s-diff-port-504903 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:01:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:01:23 default-k8s-diff-port-504903 kubelet[928]: E0802 19:01:23.906283     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:01:34 default-k8s-diff-port-504903 kubelet[928]: E0802 19:01:34.906012     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:01:49 default-k8s-diff-port-504903 kubelet[928]: E0802 19:01:49.905576     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:02:02 default-k8s-diff-port-504903 kubelet[928]: E0802 19:02:02.906934     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:02:14 default-k8s-diff-port-504903 kubelet[928]: E0802 19:02:14.921543     928 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:02:14 default-k8s-diff-port-504903 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:02:14 default-k8s-diff-port-504903 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:02:14 default-k8s-diff-port-504903 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:02:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:02:17 default-k8s-diff-port-504903 kubelet[928]: E0802 19:02:17.905914     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:02:31 default-k8s-diff-port-504903 kubelet[928]: E0802 19:02:31.905172     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:02:46 default-k8s-diff-port-504903 kubelet[928]: E0802 19:02:46.906876     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	
	
	==> storage-provisioner [98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31] <==
	I0802 18:49:51.191818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 18:49:51.203852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 18:49:51.204026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 18:50:08.603559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 18:50:08.603868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a!
	I0802 18:50:08.607894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e1ecbda-1987-4cdb-b2df-9966436f5718", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a became leader
	I0802 18:50:08.704172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a!
	
	
	==> storage-provisioner [ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462] <==
	I0802 18:49:20.438285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0802 18:49:50.441720       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pw5tt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt: exit status 1 (60.497609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pw5tt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654: exit status 3 (3.167815066s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:54:58.043480   63159 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host
	E0802 18:54:58.043507   63159 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-757654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-757654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152308783s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-757654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654: exit status 3 (3.063481611s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 18:55:07.259670   63240 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host
	E0802 18:55:07.259716   63240 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.74:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-757654" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
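Note on the assertion at start_stop_delete_test.go:241 above: the check amounts to running `out/minikube-linux-amd64 status --format={{.Host}} -p <profile>` and comparing the trimmed output with the literal string "Stopped". Exit status 3 is tolerated ("may be ok"), so the failure here is purely the "Error" host text produced while 192.168.72.74:22 had no route to host. The Go snippet below is an illustrative sketch of that kind of check, not the actual test code; the binary path and profile name are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs `minikube status --format={{.Host}}` for the given profile
// and returns the reported host state. A non-zero exit (e.g. exit status 3
// when the machine is unreachable, as in the log above) is returned alongside
// whatever state text was printed.
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile).Output()
	state := strings.TrimSpace(string(out))
	if state == "" {
		state = "Error"
	}
	return state, err
}

func main() {
	state, err := hostState("embed-certs-757654")
	if err != nil {
		// Mirrors the report's "status error: exit status 3 (may be ok)" behaviour:
		// the exit code alone does not fail the check.
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	if state != "Stopped" {
		fmt.Printf("expected post-stop host status to be %q but got %q\n", "Stopped", state)
	}
}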

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
[45 identical WARNING lines omitted: same pod-list "connection refused" message while polling https://192.168.50.104:8443]
E0802 18:57:43.928048   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
[119 identical WARNING lines omitted: same pod-list "connection refused" message while polling https://192.168.50.104:8443]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
E0802 19:00:14.261686   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
E0802 19:02:43.927276   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
E0802 19:05:14.261707   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (217.159684ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-490984" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
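For readers unfamiliar with what this wait does: the helper keeps listing pods by label selector until every match reports Ready, or the context deadline (9m0s here) expires, logging the WARNING lines above on each failed list while the apiserver is unreachable. The sketch below is a minimal, illustrative approximation in client-go, not minikube's actual helper; the function name waitForPodsReady and the kubeconfig path are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsReady is an illustrative stand-in for the test helper: it polls the
// apiserver for pods matching selector in namespace ns until all of them report
// the Ready condition or ctx expires.
func waitForPodsReady(ctx context.Context, client kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		switch {
		case err != nil:
			// While the apiserver is down this produces "connection refused"
			// warnings like the ones logged above.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		case len(pods.Items) > 0 && allReady(pods.Items):
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-time.After(5 * time.Second):
		}
	}
}

// allReady reports whether every pod in the list has the Ready condition set to True.
func allReady(pods []corev1.Pod) bool {
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	// Assumed kubeconfig location; the real test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForPodsReady(ctx, client, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println("failed waiting for k8s-app=kubernetes-dashboard:", err)
	}
}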
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (215.39644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
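A side note on the two status probes above: minikube's --format flag takes a Go text/template that is executed against the profile's status, which is why {{.Host}} can render Running while {{.APIServer}} renders Stopped for the same profile. The struct below is a loose, assumed mirror of those two fields for illustration only, not minikube's actual status type.

package main

import (
	"os"
	"text/template"
)

// Status loosely mirrors the fields referenced by the templates above
// (an assumption made for illustration; minikube's real status type is richer).
type Status struct {
	Host      string
	APIServer string
}

func main() {
	// Values taken from the post-mortem output above.
	s := Status{Host: "Running", APIServer: "Stopped"}
	for _, f := range []string{"{{.Host}}", "{{.APIServer}}"} {
		t := template.Must(template.New("status").Parse(f))
		if err := t.Execute(os.Stdout, s); err != nil {
			panic(err)
		}
		os.Stdout.WriteString("\n")
	}
}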
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-490984 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:55:07.300822   63271 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:55:07.301073   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301083   63271 out.go:304] Setting ErrFile to fd 2...
	I0802 18:55:07.301087   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301311   63271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:55:07.301870   63271 out.go:298] Setting JSON to false
	I0802 18:55:07.302787   63271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5851,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:55:07.302842   63271 start.go:139] virtualization: kvm guest
	I0802 18:55:07.305206   63271 out.go:177] * [embed-certs-757654] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:55:07.306647   63271 notify.go:220] Checking for updates...
	I0802 18:55:07.306680   63271 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:55:07.308191   63271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:55:07.309618   63271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:55:07.310900   63271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:55:07.312292   63271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:55:07.313676   63271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:55:07.315371   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:55:07.315804   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.315868   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.330686   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0802 18:55:07.331071   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.331554   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.331573   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.331865   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.332028   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.332279   63271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:55:07.332554   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.332586   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.348583   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0802 18:55:07.349036   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.349454   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.349479   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.349841   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.350094   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.386562   63271 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:55:07.387914   63271 start.go:297] selected driver: kvm2
	I0802 18:55:07.387927   63271 start.go:901] validating driver "kvm2" against &{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.388032   63271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:55:07.388727   63271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.388793   63271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:55:07.403061   63271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:55:07.403460   63271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:55:07.403517   63271 cni.go:84] Creating CNI manager for ""
	I0802 18:55:07.403530   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:55:07.403564   63271 start.go:340] cluster config:
	{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.403666   63271 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.405667   63271 out.go:177] * Starting "embed-certs-757654" primary control-plane node in "embed-certs-757654" cluster
	I0802 18:55:07.406842   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:55:07.406881   63271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:55:07.406891   63271 cache.go:56] Caching tarball of preloaded images
	I0802 18:55:07.406977   63271 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:55:07.406989   63271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:55:07.407139   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:55:07.407354   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:55:07.407402   63271 start.go:364] duration metric: took 27.558µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:55:07.407419   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:55:07.407426   63271 fix.go:54] fixHost starting: 
	I0802 18:55:07.407713   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.407759   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.421857   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0802 18:55:07.422321   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.422811   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.422834   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.423160   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.423321   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.423495   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:55:07.424925   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Running err=<nil>
	W0802 18:55:07.424950   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:55:07.427128   63271 out.go:177] * Updating the running kvm2 "embed-certs-757654" VM ...
	I0802 18:55:07.428434   63271 machine.go:94] provisionDockerMachine start ...
	I0802 18:55:07.428462   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.428711   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:55:07.431558   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432004   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:55:07.432035   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:55:07.432412   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432774   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:55:07.432921   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 18:55:07.433139   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 18:55:07.433153   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:55:10.331372   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:13.403378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:19.483421   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:22.555412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:28.635392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:31.711303   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:40.827373   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:43.899432   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:49.979406   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:53.051366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:59.131387   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:02.203356   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:08.283365   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:11.355399   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:17.435474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:20.507366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:26.587339   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:29.659353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:35.739335   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:38.811375   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:44.891395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:47.963426   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
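	(Editor's note) The long run of "no route to host" errors above is libmachine repeatedly dialing the guest's SSH port while the VM is unreachable. As a rough illustration only (not minikube's actual code), a dial-and-retry loop of this shape produces similar output; the address comes from the log, the interval and helper are assumptions.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForSSHPort keeps dialing addr until a TCP connection succeeds or the
    // deadline passes, reporting and retrying each failed dial.
    func waitForSSHPort(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		fmt.Printf("Error dialing TCP: %v\n", err)
    		time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
    	}
    	return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
    	if err := waitForSSHPort("192.168.72.74:22", 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }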
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
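	(Editor's note) kubeadm's advice above boils down to two checks: is the kubelet healthy, and did any control-plane container start under CRI-O. A minimal sketch of the second check, shelling out to the same crictl invocation kubeadm suggests (the socket path is copied from the log; the Go wrapper itself is an assumption, not minikube code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listKubeContainers runs the crictl command recommended in the kubeadm output
    // above and returns its combined output. Requires crictl on PATH and CRI-O
    // listening on the given socket.
    func listKubeContainers(socket string) (string, error) {
    	cmd := exec.Command("sudo", "crictl", "--runtime-endpoint", socket, "ps", "-a")
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := listKubeContainers("/var/run/crio/crio.sock")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    	}
    	fmt.Print(out) // filter for "kube" lines to mirror the suggested grep
    }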
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
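	(Editor's note) The block above is minikube iterating over the expected control-plane components and asking CRI-O for containers with each name; every query comes back empty, consistent with the kubelet never starting the static pods. A sketch of an equivalent loop (component list and command are taken from the log; the wrapper is assumed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same query the log shows: IDs only, all states, filtered by name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("query for %q failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d containers\n", name, len(ids))
    	}
    }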
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
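	(Editor's note) The log-gathering step above runs the CRI-O and kubelet journals, dmesg, and `kubectl describe nodes` on the node; the last one fails because no API server is listening on localhost:8443. The commands below are copied verbatim from the log; running them locally via exec (rather than over SSH, as minikube's ssh_runner does) is an assumption made for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmds := []string{
    		"sudo journalctl -u crio -n 400",
    		"sudo journalctl -u kubelet -n 400",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	}
    	for _, c := range cmds {
    		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		fmt.Printf("== %s ==\n%s", c, out)
    		if err != nil {
    			fmt.Printf("(exited with error: %v)\n", err)
    		}
    	}
    }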
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
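	(Editor's note) Every [kubelet-check] failure above comes from the same probe: an HTTP GET against the kubelet's healthz endpoint on 127.0.0.1:10248, which is refused because the kubelet never came up. A minimal stand-alone version of that probe (the URL is from the log; the retry cadence and helper are assumptions):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	for i := 0; i < 5; i++ {
    		resp, err := client.Get("http://localhost:10248/healthz")
    		if err != nil {
    			fmt.Println("kubelet not healthy yet:", err)
    			time.Sleep(10 * time.Second)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Println("kubelet healthz returned:", resp.Status)
    		return
    	}
    	fmt.Println("giving up: kubelet never became healthy")
    }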
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
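	(Editor's note) The suggestion above points at the usual cause of this failure mode on older Kubernetes versions: the kubelet and CRI-O disagreeing on the cgroup driver. One quick comparison is sketched below; the kubelet config path is the one written by kubeadm earlier in the log, while the `crio config` command and field names are standard upstream names not confirmed by this report, so treat the whole check as an assumption.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// cgroupDriver as seen by the kubelet (file written by kubeadm, see log above).
    	kubelet, _ := exec.Command("sudo", "grep", "cgroupDriver", "/var/lib/kubelet/config.yaml").CombinedOutput()
    	// cgroup_manager as configured in CRI-O (field name assumed from upstream CRI-O docs).
    	crio, _ := exec.Command("/bin/bash", "-c", "sudo crio config 2>/dev/null | grep cgroup_manager").CombinedOutput()
    	fmt.Printf("kubelet: %scrio:    %s", kubelet, crio)
    	fmt.Println("If the two differ, the log's suggestion applies: pass --extra-config=kubelet.cgroup-driver=systemd to minikube start.")
    }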
	I0802 18:56:57.033757   58571 out.go:177] 
	I0802 18:56:54.043379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:57.115474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:03.195366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:06.267441   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:12.347367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:15.419454   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:21.499312   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:24.571479   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:30.651392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:33.723367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:39.803308   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:42.875410   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:48.959363   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:52.027390   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:58.107322   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:01.179384   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:07.259377   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:10.331445   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:16.411350   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:19.483337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:25.563336   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:28.635436   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:34.715391   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:37.787412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:43.867364   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:46.939415   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:53.019307   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:56.091325   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:02.171408   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:05.247378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:11.323383   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:14.395379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:20.475380   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:23.547337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:29.627318   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:32.699366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:38.779353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:41.851395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:44.853138   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:59:44.853196   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853510   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 18:59:44.853536   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853769   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:59:44.855229   63271 machine.go:97] duration metric: took 4m37.426779586s to provisionDockerMachine
	I0802 18:59:44.855272   63271 fix.go:56] duration metric: took 4m37.44784655s for fixHost
	I0802 18:59:44.855280   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 4m37.44786842s
	W0802 18:59:44.855294   63271 start.go:714] error starting host: provision: host is not running
	W0802 18:59:44.855364   63271 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0802 18:59:44.855373   63271 start.go:729] Will try again in 5 seconds ...
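	(Editor's note) StartHost failed because provisioning never reached a running host, so minikube releases the machines lock and schedules one more attempt five seconds later. The control flow amounts to a simple retry wrapper like the sketch below (assumed shape, not the actual start.go code):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost stands in for the real provisioning step; here it always fails,
    // as it did in the log ("provision: host is not running").
    func startHost() error { return errors.New("provision: host is not running") }

    func main() {
    	if err := startHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second)
    		if err := startHost(); err != nil {
    			fmt.Println("second attempt failed:", err)
    		}
    	}
    }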
	I0802 18:59:49.856328   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:59:49.856452   63271 start.go:364] duration metric: took 63.536µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:59:49.856478   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:59:49.856486   63271 fix.go:54] fixHost starting: 
	I0802 18:59:49.856795   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:59:49.856820   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:59:49.872503   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0802 18:59:49.872935   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:59:49.873429   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:59:49.873455   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:59:49.873775   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:59:49.874015   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:59:49.874138   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:59:49.875790   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Stopped err=<nil>
	I0802 18:59:49.875812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	W0802 18:59:49.875968   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:59:49.877961   63271 out.go:177] * Restarting existing kvm2 VM for "embed-certs-757654" ...
	I0802 18:59:49.879469   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Start
	I0802 18:59:49.879683   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring networks are active...
	I0802 18:59:49.880355   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network default is active
	I0802 18:59:49.880655   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network mk-embed-certs-757654 is active
	I0802 18:59:49.881013   63271 main.go:141] libmachine: (embed-certs-757654) Getting domain xml...
	I0802 18:59:49.881644   63271 main.go:141] libmachine: (embed-certs-757654) Creating domain...
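	(Editor's note) On the second attempt the driver finds the machine Stopped and restarts it: it makes sure the default and cluster networks are active, then boots the libvirt domain. The kvm2 driver does this through the libvirt API; an equivalent done by hand through virsh would look roughly like the sketch below (network and domain names are from the log; the virsh route is an assumption, not what the driver executes).

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) {
    	out, err := exec.Command("sudo", append([]string{"virsh"}, args...)...).CombinedOutput()
    	fmt.Printf("virsh %v: %s", args, out)
    	if err != nil {
    		fmt.Println("  (non-fatal, may already be active):", err)
    	}
    }

    func main() {
    	run("net-start", "default")               // ensure the default network is active
    	run("net-start", "mk-embed-certs-757654") // ensure the cluster network is active
    	run("start", "embed-certs-757654")        // boot the stopped domain
    }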
	I0802 18:59:51.107468   63271 main.go:141] libmachine: (embed-certs-757654) Waiting to get IP...
	I0802 18:59:51.108364   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.108809   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.108870   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.108788   64474 retry.go:31] will retry after 219.792683ms: waiting for machine to come up
	I0802 18:59:51.330264   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.330775   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.330798   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.330741   64474 retry.go:31] will retry after 346.067172ms: waiting for machine to come up
	I0802 18:59:51.677951   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.678462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.678504   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.678436   64474 retry.go:31] will retry after 313.108863ms: waiting for machine to come up
	I0802 18:59:51.992934   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.993410   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.993439   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.993354   64474 retry.go:31] will retry after 427.090188ms: waiting for machine to come up
	I0802 18:59:52.421609   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:52.422050   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:52.422080   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:52.422014   64474 retry.go:31] will retry after 577.531979ms: waiting for machine to come up
	I0802 18:59:53.000756   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.001336   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.001366   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.001280   64474 retry.go:31] will retry after 808.196796ms: waiting for machine to come up
	I0802 18:59:53.811289   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.811650   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.811674   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.811600   64474 retry.go:31] will retry after 906.307667ms: waiting for machine to come up
	I0802 18:59:54.720008   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:54.720637   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:54.720667   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:54.720586   64474 retry.go:31] will retry after 951.768859ms: waiting for machine to come up
	I0802 18:59:55.674137   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:55.674555   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:55.674599   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:55.674505   64474 retry.go:31] will retry after 1.653444272s: waiting for machine to come up
	I0802 18:59:57.329527   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:57.329936   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:57.329962   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:57.329899   64474 retry.go:31] will retry after 1.517025614s: waiting for machine to come up
	I0802 18:59:58.848461   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:58.848947   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:58.848991   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:58.848907   64474 retry.go:31] will retry after 1.930384725s: waiting for machine to come up
	I0802 19:00:00.781462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:00.781935   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:00.781965   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:00.781892   64474 retry.go:31] will retry after 3.609517872s: waiting for machine to come up
	I0802 19:00:04.395801   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:04.396325   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:04.396353   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:04.396283   64474 retry.go:31] will retry after 4.053197681s: waiting for machine to come up
	I0802 19:00:08.453545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454111   63271 main.go:141] libmachine: (embed-certs-757654) Found IP for machine: 192.168.72.74
	I0802 19:00:08.454144   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has current primary IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454154   63271 main.go:141] libmachine: (embed-certs-757654) Reserving static IP address...
	I0802 19:00:08.454669   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.454695   63271 main.go:141] libmachine: (embed-certs-757654) DBG | skip adding static IP to network mk-embed-certs-757654 - found existing host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"}
	I0802 19:00:08.454709   63271 main.go:141] libmachine: (embed-certs-757654) Reserved static IP address: 192.168.72.74
	I0802 19:00:08.454723   63271 main.go:141] libmachine: (embed-certs-757654) Waiting for SSH to be available...
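	(Editor's note) The retry lines above are the driver polling libvirt for a DHCP lease on the machine's MAC address, with a growing delay between attempts, until the lease at 192.168.72.74 reappears. The same information can be read back via virsh; a small polling sketch (MAC and network name are from the log, the polling loop itself is assumed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	const mac = "52:54:00:d5:0f:4c"
    	delay := 200 * time.Millisecond
    	for i := 0; i < 20; i++ {
    		out, _ := exec.Command("sudo", "virsh", "net-dhcp-leases", "mk-embed-certs-757654").Output()
    		for _, line := range strings.Split(string(out), "\n") {
    			if strings.Contains(line, mac) {
    				fmt.Println("lease found:", strings.TrimSpace(line))
    				return
    			}
    		}
    		fmt.Printf("no lease yet, will retry after %v\n", delay)
    		time.Sleep(delay)
    		delay += delay / 2 // grow the delay, as the log's retry.go does
    	}
    	fmt.Println("gave up waiting for a DHCP lease")
    }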
	I0802 19:00:08.454741   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Getting to WaitForSSH function...
	I0802 19:00:08.457106   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457426   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.457477   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457594   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH client type: external
	I0802 19:00:08.457622   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa (-rw-------)
	I0802 19:00:08.457655   63271 main.go:141] libmachine: (embed-certs-757654) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:00:08.457671   63271 main.go:141] libmachine: (embed-certs-757654) DBG | About to run SSH command:
	I0802 19:00:08.457689   63271 main.go:141] libmachine: (embed-certs-757654) DBG | exit 0
	I0802 19:00:08.583153   63271 main.go:141] libmachine: (embed-certs-757654) DBG | SSH cmd err, output: <nil>: 
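	(Editor's note) For this first connectivity check the driver shells out to the system ssh binary with the exact options printed above (no known-hosts, 10s connect timeout, identity file only) and runs `exit 0`. Reproduced as a plain exec call (flags, key path, and destination are copied from the log; appending the command as the final arguments is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
    		"-o", "ControlMaster=no", "-o", "ControlPath=none",
    		"-o", "LogLevel=quiet", "-o", "PasswordAuthentication=no",
    		"-o", "ServerAliveInterval=60", "-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"docker@192.168.72.74",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa",
    		"-p", "22",
    		"exit", "0",
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	fmt.Printf("ssh output: %q, err: %v\n", out, err)
    }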
	I0802 19:00:08.583546   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetConfigRaw
	I0802 19:00:08.584156   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.586987   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587373   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.587403   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587628   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 19:00:08.587836   63271 machine.go:94] provisionDockerMachine start ...
	I0802 19:00:08.587858   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:08.588062   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.590424   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.590790   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590889   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.591079   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591258   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591427   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.591610   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.591800   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.591815   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 19:00:08.699598   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 19:00:08.699631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.699874   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 19:00:08.699905   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.700064   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.702828   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703221   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.703250   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703426   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.703600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703751   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703891   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.704036   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.704249   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.704267   63271 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-757654 && echo "embed-certs-757654" | sudo tee /etc/hostname
	I0802 19:00:08.825824   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-757654
	
	I0802 19:00:08.825854   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.828688   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829029   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.829059   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829236   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.829456   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829603   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829752   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.829933   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.830107   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.830124   63271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-757654' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-757654/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-757654' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:00:08.949050   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
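The two SSH commands above set the guest hostname and then pin it to 127.0.1.1 in /etc/hosts. As a minimal sketch (not minikube's own code), the snippet below shows how that provisioning shell sequence could be composed in Go before being handed to an SSH runner; the function name is an illustration only.

    package main

    import "fmt"

    // hostnameProvisionCmd reproduces the shell sequence from the log above:
    // set the hostname, then make sure /etc/hosts resolves it via 127.0.1.1.
    func hostnameProvisionCmd(hostname string) string {
    	return fmt.Sprintf(
    		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostnameProvisionCmd("embed-certs-757654"))
    }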
	I0802 19:00:08.949088   63271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:00:08.949109   63271 buildroot.go:174] setting up certificates
	I0802 19:00:08.949117   63271 provision.go:84] configureAuth start
	I0802 19:00:08.949135   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.949433   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.952237   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.952573   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952723   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.954970   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955440   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.955468   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955644   63271 provision.go:143] copyHostCerts
	I0802 19:00:08.955696   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:00:08.955706   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:00:08.955801   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:00:08.955926   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:00:08.955939   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:00:08.955970   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:00:08.956043   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:00:08.956051   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:00:08.956074   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:00:08.956136   63271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.embed-certs-757654 san=[127.0.0.1 192.168.72.74 embed-certs-757654 localhost minikube]
	I0802 19:00:09.274751   63271 provision.go:177] copyRemoteCerts
	I0802 19:00:09.274811   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:00:09.274833   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.277417   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277757   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.277782   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277937   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.278139   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.278307   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.278429   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.360988   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:00:09.383169   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 19:00:09.406422   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 19:00:09.430412   63271 provision.go:87] duration metric: took 481.276691ms to configureAuth
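configureAuth regenerates the machine's server certificate with the SANs listed at provision.go:117 and copies it to /etc/docker. A quick way to confirm what actually landed in server.pem is to decode it with crypto/x509, as in this sketch (the path is taken from the log; any PEM certificate works):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path taken from the log; adjust as needed.
    	data, err := os.ReadFile("/etc/docker/server.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Expect the SANs from the log: 127.0.0.1, 192.168.72.74,
    	// embed-certs-757654, localhost, minikube.
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    }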
	I0802 19:00:09.430474   63271 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:00:09.430718   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:00:09.430812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.433678   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434068   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.434097   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434234   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.434458   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434768   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.434952   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.435197   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.435220   63271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:00:09.694497   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:00:09.694540   63271 machine.go:97] duration metric: took 1.10669177s to provisionDockerMachine
	I0802 19:00:09.694555   63271 start.go:293] postStartSetup for "embed-certs-757654" (driver="kvm2")
	I0802 19:00:09.694566   63271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:00:09.694586   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.694913   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:00:09.694938   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.697387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697722   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.697765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697828   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.698011   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.698159   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.698280   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.781383   63271 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:00:09.785521   63271 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:00:09.785555   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:00:09.785639   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:00:09.785760   63271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:00:09.785891   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:00:09.796028   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:09.820115   63271 start.go:296] duration metric: took 125.544407ms for postStartSetup
	I0802 19:00:09.820156   63271 fix.go:56] duration metric: took 19.963670883s for fixHost
	I0802 19:00:09.820175   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.823086   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.823427   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.823881   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824077   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824217   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.824403   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.824616   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.824627   63271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:00:09.931624   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625209.908806442
	
	I0802 19:00:09.931652   63271 fix.go:216] guest clock: 1722625209.908806442
	I0802 19:00:09.931660   63271 fix.go:229] Guest: 2024-08-02 19:00:09.908806442 +0000 UTC Remote: 2024-08-02 19:00:09.82015998 +0000 UTC m=+302.554066499 (delta=88.646462ms)
	I0802 19:00:09.931680   63271 fix.go:200] guest clock delta is within tolerance: 88.646462ms
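fix.go compares the guest clock against the host clock and only resyncs when the delta exceeds a tolerance; here the 88.6ms delta is accepted. A minimal sketch of that comparison, using the timestamps from the log (the 2s tolerance is an assumption for illustration, not minikube's actual constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockWithinTolerance reports whether the guest clock is close enough
    // to the host clock that no resync is needed.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	// Values taken from the log lines above.
    	guest := time.Unix(0, 1722625209908806442)
    	host := time.Date(2024, 8, 2, 19, 0, 9, 820159980, time.UTC)
    	fmt.Println("within tolerance:", clockWithinTolerance(guest, host, 2*time.Second))
    }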
	I0802 19:00:09.931686   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 20.075223098s
	I0802 19:00:09.931706   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.931993   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:09.934694   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935023   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.935067   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935214   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935703   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935866   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935961   63271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:00:09.936013   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.936079   63271 ssh_runner.go:195] Run: cat /version.json
	I0802 19:00:09.936100   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.938619   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.938973   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.938996   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939017   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939183   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939346   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939541   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.939546   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.939566   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939733   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939753   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.939839   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939986   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.940143   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:10.060439   63271 ssh_runner.go:195] Run: systemctl --version
	I0802 19:00:10.066688   63271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:00:10.209783   63271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:00:10.215441   63271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:00:10.215530   63271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:00:10.230786   63271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:00:10.230808   63271 start.go:495] detecting cgroup driver to use...
	I0802 19:00:10.230894   63271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:00:10.246480   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:00:10.260637   63271 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:00:10.260694   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:00:10.273890   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:00:10.286949   63271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:00:10.396045   63271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:00:10.558766   63271 docker.go:233] disabling docker service ...
	I0802 19:00:10.558830   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:00:10.572592   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:00:10.585221   63271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:00:10.711072   63271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:00:10.831806   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:00:10.853846   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:00:10.871644   63271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:00:10.871703   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.881356   63271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:00:10.881415   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.891537   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.901976   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.911415   63271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:00:10.921604   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.931914   63271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.948828   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
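The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs with conmon in the pod cgroup, and open unprivileged ports via default_sysctls. A partial sketch of the same edits expressed in Go with regexp, covering only the pause-image and cgroup settings:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // rewriteCrioConf applies substitutions equivalent to the first two sed
    // commands in the log; the default_sysctls handling is omitted here.
    func rewriteCrioConf(conf string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\nconmon_cgroup = \"pod\"")
    	return conf
    }

    func main() {
    	in := `pause_image = "registry.k8s.io/pause:3.8"
    cgroup_manager = "systemd"`
    	fmt.Println(rewriteCrioConf(in))
    }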
	I0802 19:00:10.958456   63271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:00:10.967234   63271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:00:10.967291   63271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:00:10.980348   63271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
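The sysctl probe above fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before crio is restarted. A sketch of that fallback, assuming passwordless sudo as in the CI guest:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func ensureBridgeNetfilter() error {
    	// If the sysctl node exists, br_netfilter is already loaded.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
    		return nil
    	}
    	// Otherwise load the module, mirroring "sudo modprobe br_netfilter".
    	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
    	}
    	// Enable forwarding, mirroring "echo 1 > /proc/sys/net/ipv4/ip_forward".
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }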
	I0802 19:00:10.989378   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:11.105254   63271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:00:11.241019   63271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:00:11.241094   63271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:00:11.245512   63271 start.go:563] Will wait 60s for crictl version
	I0802 19:00:11.245560   63271 ssh_runner.go:195] Run: which crictl
	I0802 19:00:11.249126   63271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:00:11.287138   63271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:00:11.287233   63271 ssh_runner.go:195] Run: crio --version
	I0802 19:00:11.316821   63271 ssh_runner.go:195] Run: crio --version
	I0802 19:00:11.344756   63271 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:00:11.346052   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:11.348613   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349012   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:11.349040   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349288   63271 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0802 19:00:11.353165   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:11.364518   63271 kubeadm.go:883] updating cluster {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:00:11.364682   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:00:11.364743   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:11.399565   63271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:00:11.399667   63271 ssh_runner.go:195] Run: which lz4
	I0802 19:00:11.403250   63271 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:00:11.406951   63271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:00:11.406982   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:00:12.658177   63271 crio.go:462] duration metric: took 1.254950494s to copy over tarball
	I0802 19:00:12.658258   63271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:00:14.794602   63271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.136306374s)
	I0802 19:00:14.794636   63271 crio.go:469] duration metric: took 2.136431079s to extract the tarball
	I0802 19:00:14.794644   63271 ssh_runner.go:146] rm: /preloaded.tar.lz4
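Because no preloaded images were found on the guest, the preload tarball (~406MB) is copied over and unpacked into /var, which takes about 2.1s here. The extraction command from the log can be reproduced roughly as below (paths and flags are those shown above; lz4 must be available on the guest):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Same flags as in the log: preserve security xattrs, decompress with lz4.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "extract failed:", err)
    		os.Exit(1)
    	}
    	fmt.Printf("extracted preload in %s\n", time.Since(start))
    }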
	I0802 19:00:14.831660   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:14.871909   63271 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:00:14.871931   63271 cache_images.go:84] Images are preloaded, skipping loading
	I0802 19:00:14.871939   63271 kubeadm.go:934] updating node { 192.168.72.74 8443 v1.30.3 crio true true} ...
	I0802 19:00:14.872057   63271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-757654 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0802 19:00:14.872134   63271 ssh_runner.go:195] Run: crio config
	I0802 19:00:14.921874   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:14.921937   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:14.921952   63271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:00:14.921978   63271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.74 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-757654 NodeName:embed-certs-757654 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:00:14.922146   63271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-757654"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
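The generated kubeadm config above carries three kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration) plus a KubeProxyConfiguration, all written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A quick sanity check that the multi-document YAML parses is sketched below using gopkg.in/yaml.v3; this is an illustrative helper, not part of minikube:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Print the apiVersion/kind of each document, e.g. InitConfiguration,
    		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
    		fmt.Println(doc["apiVersion"], doc["kind"])
    	}
    }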
	
	I0802 19:00:14.922224   63271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:00:14.931751   63271 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:00:14.931818   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:00:14.942115   63271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0802 19:00:14.959155   63271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:00:14.977137   63271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0802 19:00:14.994426   63271 ssh_runner.go:195] Run: grep 192.168.72.74	control-plane.minikube.internal$ /etc/hosts
	I0802 19:00:14.997882   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:15.009925   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:15.117317   63271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:00:15.133773   63271 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654 for IP: 192.168.72.74
	I0802 19:00:15.133798   63271 certs.go:194] generating shared ca certs ...
	I0802 19:00:15.133815   63271 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:00:15.133986   63271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:00:15.134036   63271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:00:15.134044   63271 certs.go:256] generating profile certs ...
	I0802 19:00:15.134174   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/client.key
	I0802 19:00:15.134268   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key.edfbb872
	I0802 19:00:15.134321   63271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key
	I0802 19:00:15.134471   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:00:15.134513   63271 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:00:15.134523   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:00:15.134559   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:00:15.134592   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:00:15.134629   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:00:15.134680   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:15.135580   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:00:15.166676   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:00:15.198512   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:00:15.222007   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:00:15.256467   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0802 19:00:15.282024   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:00:15.313750   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:00:15.336950   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 19:00:15.361688   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:00:15.385790   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:00:15.407897   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:00:15.432712   63271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:00:15.450086   63271 ssh_runner.go:195] Run: openssl version
	I0802 19:00:15.455897   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:00:15.466553   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470703   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470764   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.476433   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:00:15.486297   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:00:15.497188   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501643   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501712   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.507198   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 19:00:15.517747   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:00:15.528337   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532658   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532704   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.537982   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:00:15.547569   63271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:00:15.551539   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 19:00:15.556863   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 19:00:15.562004   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 19:00:15.567611   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 19:00:15.572837   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 19:00:15.577902   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
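The run of "openssl x509 ... -checkend 86400" calls above verifies that each control-plane certificate stays valid for at least another 24 hours before the existing configuration is reused. The same check in Go, as a sketch (path and window taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid
    // for at least the given window, mirroring "openssl x509 -checkend".
    func validFor(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }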
	I0802 19:00:15.583126   63271 kubeadm.go:392] StartCluster: {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:00:15.583255   63271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:00:15.583325   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.618245   63271 cri.go:89] found id: ""
	I0802 19:00:15.618324   63271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:00:15.627752   63271 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 19:00:15.627774   63271 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 19:00:15.627830   63271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 19:00:15.636794   63271 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 19:00:15.637893   63271 kubeconfig.go:125] found "embed-certs-757654" server: "https://192.168.72.74:8443"
	I0802 19:00:15.640011   63271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 19:00:15.649091   63271 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.74
	I0802 19:00:15.649122   63271 kubeadm.go:1160] stopping kube-system containers ...
	I0802 19:00:15.649135   63271 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 19:00:15.649199   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.688167   63271 cri.go:89] found id: ""
	I0802 19:00:15.688231   63271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 19:00:15.707188   63271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:00:15.717501   63271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:00:15.717523   63271 kubeadm.go:157] found existing configuration files:
	
	I0802 19:00:15.717564   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:00:15.726600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:00:15.726648   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:00:15.736483   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:00:15.745075   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:00:15.745137   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:00:15.754027   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.762600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:00:15.762650   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.771220   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:00:15.779384   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:00:15.779450   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
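The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not contain it (here all four files are simply absent, so each grep exits with status 2 and the rm is a no-op). A compact sketch of that loop:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// Missing files and files without the endpoint are both treated as stale.
    		if err != nil || !bytes.Contains(data, endpoint) {
    			fmt.Printf("removing stale config %s\n", f)
    			os.Remove(f) // best effort, mirroring "sudo rm -f"
    		}
    	}
    }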
	I0802 19:00:15.788081   63271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:00:15.796772   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:15.902347   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.011025   63271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108635171s)
	I0802 19:00:17.011068   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.229454   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.302558   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.405239   63271 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:00:17.405325   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:17.905496   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.405716   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.906507   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.405762   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.905447   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.920906   63271 api_server.go:72] duration metric: took 2.515676455s to wait for apiserver process to appear ...
	I0802 19:00:19.920938   63271 api_server.go:88] waiting for apiserver healthz status ...
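From here the start code polls the apiserver's /healthz endpoint until it stops returning 403/500, which is what the repeated blocks below show (403 while RBAC bootstrapping has not granted anonymous access, then 500 while post-start hooks finish). A hedged sketch of such a poll; TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip verification instead of loading the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.72.74:8443/healthz", 60*time.Second))
    }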
	I0802 19:00:19.920965   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.287856   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.287881   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.287893   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.328293   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.328340   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.421484   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.426448   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.426493   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:22.921227   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.925796   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.925830   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.421392   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.426450   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:23.426474   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.921015   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.925369   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0802 19:00:23.931827   63271 api_server.go:141] control plane version: v1.30.3
	I0802 19:00:23.931850   63271 api_server.go:131] duration metric: took 4.010904656s to wait for apiserver health ...
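The loop above is minikube's api_server.go health probe: it keeps hitting /healthz, treating the anonymous 403s and the 500s from still-failing poststarthooks as retryable, until the endpoint finally returns 200. A minimal standalone sketch of that kind of probe is below; the roughly 500ms spacing matches the timestamps above, while the 2-minute deadline and the skipped TLS verification are assumptions, and this is not minikube's actual code.

```go
// Minimal sketch (not minikube's api_server.go): poll an apiserver /healthz
// endpoint until it returns 200 or a deadline passes. The URL matches the
// log above; the deadline and TLS handling are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a self-signed cert during bring-up, so this
		// illustrative anonymous probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 (anonymous user) and 500 (poststarthooks not finished)
			// are retried, exactly as the log above shows.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.74:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```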
	I0802 19:00:23.931860   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:23.931869   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:23.933936   63271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:00:23.935422   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:00:23.946751   63271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
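The two steps above create /etc/cni/net.d and copy a 496-byte bridge conflist onto the node for the kvm2 + crio combination. Purely as an illustration of what such a file contains, the sketch below writes a minimal bridge configuration with host-local IPAM to the same path; the bridge name, subnet, and plugin list are assumptions and this is not the exact file minikube generates.

```go
// Illustrative only: write a minimal bridge CNI conflist of the kind the
// "Configuring bridge CNI" step installs. Field values (bridge name, subnet)
// are assumptions, not the exact 496-byte file minikube ships.
package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```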
	I0802 19:00:23.965059   63271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:00:23.976719   63271 system_pods.go:59] 8 kube-system pods found
	I0802 19:00:23.976770   63271 system_pods.go:61] "coredns-7db6d8ff4d-dldmc" [fd66a301-73a8-4c3a-9a3c-813d9940c233] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:00:23.976782   63271 system_pods.go:61] "etcd-embed-certs-757654" [5644c343-74c1-4b35-8700-0f75991c1227] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 19:00:23.976793   63271 system_pods.go:61] "kube-apiserver-embed-certs-757654" [726eda65-25be-4f4d-9322-e8c285df16b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 19:00:23.976801   63271 system_pods.go:61] "kube-controller-manager-embed-certs-757654" [aa23470d-fb61-4a05-ad70-afa56cb3439c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 19:00:23.976808   63271 system_pods.go:61] "kube-proxy-k8lnc" [8cedcb95-3796-4c88-9980-74f75e1240f6] Running
	I0802 19:00:23.976816   63271 system_pods.go:61] "kube-scheduler-embed-certs-757654" [1f3f3c29-c680-44d8-8d6f-76a6d5f99eca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 19:00:23.976824   63271 system_pods.go:61] "metrics-server-569cc877fc-8nfts" [fed56acf-7b52-4414-a3cd-003d769368a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 19:00:23.976830   63271 system_pods.go:61] "storage-provisioner" [b9e24584-d431-431e-a0ce-4e10c8ed28e7] Running
	I0802 19:00:23.976842   63271 system_pods.go:74] duration metric: took 11.758424ms to wait for pod list to return data ...
	I0802 19:00:23.976851   63271 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:00:23.980046   63271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:00:23.980077   63271 node_conditions.go:123] node cpu capacity is 2
	I0802 19:00:23.980091   63271 node_conditions.go:105] duration metric: took 3.224494ms to run NodePressure ...
	I0802 19:00:23.980110   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:24.244478   63271 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248352   63271 kubeadm.go:739] kubelet initialised
	I0802 19:00:24.248371   63271 kubeadm.go:740] duration metric: took 3.863328ms waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248380   63271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:00:24.260573   63271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:26.266305   63271 pod_ready.go:102] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:28.267770   63271 pod_ready.go:92] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:28.267794   63271 pod_ready.go:81] duration metric: took 4.007193958s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:28.267804   63271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.281164   63271 pod_ready.go:102] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:30.775554   63271 pod_ready.go:92] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:30.775577   63271 pod_ready.go:81] duration metric: took 2.507766234s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.775587   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280678   63271 pod_ready.go:92] pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:31.280706   63271 pod_ready.go:81] duration metric: took 505.111529ms for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280718   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:33.285821   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:35.786849   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:37.787600   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:38.286212   63271 pod_ready.go:92] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.286238   63271 pod_ready.go:81] duration metric: took 7.005511802s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.286251   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290785   63271 pod_ready.go:92] pod "kube-proxy-k8lnc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.290808   63271 pod_ready.go:81] duration metric: took 4.549071ms for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290819   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294795   63271 pod_ready.go:92] pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.294818   63271 pod_ready.go:81] duration metric: took 3.989197ms for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294827   63271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:40.301046   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:42.800745   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:45.300974   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:47.301922   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:49.800527   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:51.801849   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:54.301458   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:56.801027   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:59.300566   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:01.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:03.801351   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:05.801445   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:08.300706   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:10.801090   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:13.302416   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:15.801900   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:18.301115   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:20.801699   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:23.301191   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:25.801392   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:28.300859   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:30.303055   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:32.801185   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:35.300663   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:37.800850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:39.801554   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:42.299824   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:44.300915   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:46.301116   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:48.801022   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:50.801265   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:53.301815   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:55.804154   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:58.306260   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:00.800350   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:02.801306   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:04.801767   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:06.801850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:09.300911   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:11.801540   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:13.801899   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:16.301139   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:18.801264   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:20.801310   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:22.801602   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:25.300418   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:27.800576   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:29.801107   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:32.300367   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:34.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:36.800348   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:38.800863   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:41.301210   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:43.800898   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:45.801495   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:47.802115   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:50.300758   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:52.800119   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:54.800742   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:57.300894   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:59.301967   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:01.801753   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:04.300020   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:06.301903   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:08.801102   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:10.801655   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:13.301099   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:15.307703   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:17.800572   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:19.800718   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:21.801336   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:23.806594   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:26.300529   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:28.301514   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:30.801418   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:33.300343   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:35.301005   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:37.302055   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:39.800705   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:41.801159   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:43.801333   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:45.801519   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:47.803743   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:50.301107   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:52.302310   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:54.801379   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:56.802698   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:59.300329   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:01.302266   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:03.801942   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:06.302523   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:08.800574   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:10.802039   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:12.802886   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:15.307009   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:17.803399   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:20.303980   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:22.801487   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:25.300731   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:27.301890   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:29.801312   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:32.299843   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:34.300651   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:36.301491   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:38.294999   63271 pod_ready.go:81] duration metric: took 4m0.000155688s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" ...
	E0802 19:04:38.295040   63271 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" (will not retry!)
	I0802 19:04:38.295060   63271 pod_ready.go:38] duration metric: took 4m14.04667112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
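The pod_ready.go lines above poll each system-critical pod's Ready condition every couple of seconds and give up after 4m0s, which is what happens here: metrics-server-569cc877fc-8nfts never reports Ready, so the extra wait times out. A minimal client-go sketch of that kind of wait follows; the kubeconfig path and 2-second interval are assumptions, and this is not minikube's pod_ready.go.

```go
// Sketch of a "wait for pod Ready" loop in the spirit of the pod_ready.go
// checks above. Not minikube's implementation; kubeconfig path and polling
// interval are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	name, ns := "metrics-server-569cc877fc-8nfts", "kube-system"
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for pod %q to be Ready\n", name)
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```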
	I0802 19:04:38.295085   63271 kubeadm.go:597] duration metric: took 4m22.667305395s to restartPrimaryControlPlane
	W0802 19:04:38.295180   63271 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 19:04:38.295215   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 19:05:09.113784   63271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.818542247s)
	I0802 19:05:09.113872   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:05:09.132652   63271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:05:09.151560   63271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:05:09.161782   63271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:05:09.161805   63271 kubeadm.go:157] found existing configuration files:
	
	I0802 19:05:09.161852   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:05:09.170533   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:05:09.170597   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:05:09.179443   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:05:09.187823   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:05:09.187874   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:05:09.196537   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:05:09.204923   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:05:09.204971   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:05:09.213510   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:05:09.221920   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:05:09.221977   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:05:09.230545   63271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:05:09.279115   63271 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:05:09.279216   63271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:05:09.421011   63271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:05:09.421143   63271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:05:09.421309   63271 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 19:05:09.622157   63271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:05:09.624863   63271 out.go:204]   - Generating certificates and keys ...
	I0802 19:05:09.624938   63271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:05:09.625017   63271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:05:09.625115   63271 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 19:05:09.625212   63271 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 19:05:09.625309   63271 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 19:05:09.625401   63271 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 19:05:09.625486   63271 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 19:05:09.625571   63271 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 19:05:09.626114   63271 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 19:05:09.626203   63271 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 19:05:09.626241   63271 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 19:05:09.626289   63271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:05:09.822713   63271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:05:10.181638   63271 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:05:10.512424   63271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:05:10.714859   63271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:05:10.884498   63271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:05:10.885164   63271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:05:10.887815   63271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:05:10.889716   63271 out.go:204]   - Booting up control plane ...
	I0802 19:05:10.889837   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:05:10.889952   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:05:10.890264   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:05:10.909853   63271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:05:10.910852   63271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:05:10.910923   63271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:05:11.036494   63271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:05:11.036625   63271 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:05:11.538395   63271 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.975394ms
	I0802 19:05:11.538496   63271 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:05:16.040390   63271 kubeadm.go:310] [api-check] The API server is healthy after 4.501873699s
	I0802 19:05:16.052960   63271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:05:16.071975   63271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:05:16.097491   63271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:05:16.097745   63271 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-757654 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:05:16.114782   63271 kubeadm.go:310] [bootstrap-token] Using token: 16dj5v.yumf7pzn1z6g3iqs
	I0802 19:05:16.115985   63271 out.go:204]   - Configuring RBAC rules ...
	I0802 19:05:16.116118   63271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:05:16.120188   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:05:16.126277   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:05:16.128999   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:05:16.131913   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:05:16.137874   63271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:05:16.448583   63271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:05:16.887723   63271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:05:17.446999   63271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:05:17.448086   63271 kubeadm.go:310] 
	I0802 19:05:17.448166   63271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:05:17.448179   63271 kubeadm.go:310] 
	I0802 19:05:17.448264   63271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:05:17.448272   63271 kubeadm.go:310] 
	I0802 19:05:17.448308   63271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:05:17.448401   63271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:05:17.448471   63271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:05:17.448481   63271 kubeadm.go:310] 
	I0802 19:05:17.448574   63271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:05:17.448584   63271 kubeadm.go:310] 
	I0802 19:05:17.448647   63271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:05:17.448657   63271 kubeadm.go:310] 
	I0802 19:05:17.448723   63271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:05:17.448816   63271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:05:17.448921   63271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:05:17.448937   63271 kubeadm.go:310] 
	I0802 19:05:17.449030   63271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:05:17.449105   63271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:05:17.449111   63271 kubeadm.go:310] 
	I0802 19:05:17.449187   63271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 16dj5v.yumf7pzn1z6g3iqs \
	I0802 19:05:17.449311   63271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:05:17.449357   63271 kubeadm.go:310] 	--control-plane 
	I0802 19:05:17.449366   63271 kubeadm.go:310] 
	I0802 19:05:17.449480   63271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:05:17.449496   63271 kubeadm.go:310] 
	I0802 19:05:17.449581   63271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 16dj5v.yumf7pzn1z6g3iqs \
	I0802 19:05:17.449681   63271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:05:17.450848   63271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
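The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm derives as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short sketch that recomputes it from the certificateDir used in this run; the exact ca.crt filename is an assumption.

```go
// Sketch: recompute the --discovery-token-ca-cert-hash shown in the join
// command above. kubeadm's hash is the SHA-256 of the cluster CA cert's
// Subject Public Key Info; the ca.crt path is an assumption.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed filename
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```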
	I0802 19:05:17.450880   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:05:17.450894   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:05:17.452619   63271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:05:17.453986   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:05:17.465774   63271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 19:05:17.490077   63271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:05:17.490204   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:17.490227   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-757654 minikube.k8s.io/updated_at=2024_08_02T19_05_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=embed-certs-757654 minikube.k8s.io/primary=true
	I0802 19:05:17.667909   63271 ops.go:34] apiserver oom_adj: -16
	I0802 19:05:17.668050   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:18.169144   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:18.668337   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:19.168306   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:19.669016   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:20.168693   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:20.668360   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:21.169136   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:21.668931   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:22.168445   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:22.668373   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:23.168654   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:23.668818   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:24.168975   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:24.668943   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:25.168934   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:25.669051   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:26.169075   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:26.668512   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:27.168715   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:27.669044   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:28.169018   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:28.668155   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:29.169111   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:29.669117   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:30.168732   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:30.251617   63271 kubeadm.go:1113] duration metric: took 12.761473169s to wait for elevateKubeSystemPrivileges
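The elevateKubeSystemPrivileges phase above creates the minikube-rbac ClusterRoleBinding (cluster-admin for kube-system:default) and then retries `kubectl get sa default` roughly every 500ms until the default service account exists. A hedged client-go sketch of the binding half is below; the name, role, and subject mirror the kubectl command above, the kubeconfig path is the on-VM one from the log, and error handling is simplified.

```go
// Sketch of the "minikube-rbac" ClusterRoleBinding created above, expressed
// with client-go instead of kubectl. Not minikube's code; error handling and
// the kubeconfig location are simplified assumptions.
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```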
	I0802 19:05:30.251659   63271 kubeadm.go:394] duration metric: took 5m14.668560428s to StartCluster
	I0802 19:05:30.251683   63271 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:05:30.251781   63271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:05:30.253864   63271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:05:30.254120   63271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:05:30.254228   63271 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:05:30.254286   63271 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-757654"
	I0802 19:05:30.254296   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:05:30.254323   63271 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-757654"
	W0802 19:05:30.254333   63271 addons.go:243] addon storage-provisioner should already be in state true
	I0802 19:05:30.254351   63271 addons.go:69] Setting default-storageclass=true in profile "embed-certs-757654"
	I0802 19:05:30.254363   63271 addons.go:69] Setting metrics-server=true in profile "embed-certs-757654"
	I0802 19:05:30.254400   63271 addons.go:234] Setting addon metrics-server=true in "embed-certs-757654"
	I0802 19:05:30.254403   63271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-757654"
	W0802 19:05:30.254410   63271 addons.go:243] addon metrics-server should already be in state true
	I0802 19:05:30.254436   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.254366   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.254785   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254820   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.254855   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254884   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.254887   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254928   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.256100   63271 out.go:177] * Verifying Kubernetes components...
	I0802 19:05:30.257487   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:05:30.270795   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0802 19:05:30.271280   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I0802 19:05:30.271505   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.271784   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.272204   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.272229   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.272368   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.272401   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.272592   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.272737   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.273157   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0802 19:05:30.273182   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.273226   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.273354   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.273386   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.273519   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.273996   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.274026   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.274365   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.274563   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.278582   63271 addons.go:234] Setting addon default-storageclass=true in "embed-certs-757654"
	W0802 19:05:30.278609   63271 addons.go:243] addon default-storageclass should already be in state true
	I0802 19:05:30.278640   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.279018   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.279059   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.290269   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0802 19:05:30.291002   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.291611   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.291631   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.291674   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0802 19:05:30.292009   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.292112   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.292207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.292748   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.292765   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.293075   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.293312   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.294125   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0802 19:05:30.294477   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.295166   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.295190   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.295632   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.296279   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.296442   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.296487   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.296864   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.298655   63271 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0802 19:05:30.298658   63271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:05:30.300094   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 19:05:30.300112   63271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 19:05:30.300133   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.300247   63271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:05:30.300271   63271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:05:30.300294   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.304247   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.304746   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.304761   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.304783   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.305074   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.305142   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.305165   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.305413   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.305517   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.305629   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.305688   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.305850   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.305908   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.306275   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.317504   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0802 19:05:30.317941   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.318474   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.318491   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.318858   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.319055   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.321556   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.321929   63271 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:05:30.321940   63271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:05:30.321955   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.325005   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.325489   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.325507   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.325710   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.325887   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.326077   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.326244   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.427644   63271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:05:30.447261   63271 node_ready.go:35] waiting up to 6m0s for node "embed-certs-757654" to be "Ready" ...
	I0802 19:05:30.455056   63271 node_ready.go:49] node "embed-certs-757654" has status "Ready":"True"
	I0802 19:05:30.455077   63271 node_ready.go:38] duration metric: took 7.781034ms for node "embed-certs-757654" to be "Ready" ...
	I0802 19:05:30.455088   63271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:05:30.459517   63271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.464549   63271 pod_ready.go:92] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.464574   63271 pod_ready.go:81] duration metric: took 5.029953ms for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.464583   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.469443   63271 pod_ready.go:92] pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.469477   63271 pod_ready.go:81] duration metric: took 4.883324ms for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.469492   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.474900   63271 pod_ready.go:92] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.474924   63271 pod_ready.go:81] duration metric: took 5.424192ms for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.474933   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.481860   63271 pod_ready.go:92] pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.481880   63271 pod_ready.go:81] duration metric: took 6.940862ms for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.481890   63271 pod_ready.go:38] duration metric: took 26.786983ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:05:30.481904   63271 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:05:30.481954   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:05:30.501252   63271 api_server.go:72] duration metric: took 247.089995ms to wait for apiserver process to appear ...
	I0802 19:05:30.501297   63271 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:05:30.501319   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:05:30.506521   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0802 19:05:30.507590   63271 api_server.go:141] control plane version: v1.30.3
	I0802 19:05:30.507613   63271 api_server.go:131] duration metric: took 6.307506ms to wait for apiserver health ...
	I0802 19:05:30.507622   63271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:05:30.559683   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 19:05:30.559711   63271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0802 19:05:30.564451   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:05:30.617129   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:05:30.639218   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 19:05:30.639250   63271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 19:05:30.666665   63271 system_pods.go:59] 5 kube-system pods found
	I0802 19:05:30.666692   63271 system_pods.go:61] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:30.666697   63271 system_pods.go:61] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:30.666700   63271 system_pods.go:61] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:30.666705   63271 system_pods.go:61] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:30.666709   63271 system_pods.go:61] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:30.666717   63271 system_pods.go:74] duration metric: took 159.089874ms to wait for pod list to return data ...
	I0802 19:05:30.666724   63271 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:05:30.702756   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 19:05:30.702788   63271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 19:05:30.751529   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 19:05:30.878159   63271 default_sa.go:45] found service account: "default"
	I0802 19:05:30.878187   63271 default_sa.go:55] duration metric: took 211.457433ms for default service account to be created ...
	I0802 19:05:30.878198   63271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:05:31.060423   63271 system_pods.go:86] 7 kube-system pods found
	I0802 19:05:31.060453   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.060461   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.060466   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.060472   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.060476   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.060481   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.060485   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.060510   63271 retry.go:31] will retry after 244.863307ms: missing components: kube-dns, kube-proxy
	I0802 19:05:31.313026   63271 system_pods.go:86] 7 kube-system pods found
	I0802 19:05:31.313075   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.313094   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.313103   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.313111   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.313119   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.313130   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.313141   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.313162   63271 retry.go:31] will retry after 359.054186ms: missing components: kube-dns, kube-proxy
	I0802 19:05:31.476794   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.476831   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.476844   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.476881   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477155   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477211   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477227   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.477235   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477385   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.477404   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477425   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477437   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.477446   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477466   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477477   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477651   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.477705   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477718   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.503788   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.503817   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.504094   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.504110   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.504149   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.682829   63271 system_pods.go:86] 8 kube-system pods found
	I0802 19:05:31.682863   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.682874   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.682881   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.682888   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.682896   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.682904   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.682911   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.682920   63271 system_pods.go:89] "storage-provisioner" [d3300a13-9ee5-4eeb-9e21-9ef40aad1379] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0802 19:05:31.682958   63271 retry.go:31] will retry after 403.454792ms: missing components: kube-dns, kube-proxy
	I0802 19:05:32.029198   63271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277620332s)
	I0802 19:05:32.029247   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:32.029262   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:32.029731   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:32.029756   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:32.029768   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:32.029778   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:32.029783   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:32.030050   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:32.030087   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:32.030101   63271 addons.go:475] Verifying addon metrics-server=true in "embed-certs-757654"
	I0802 19:05:32.032554   63271 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0802 19:05:32.033965   63271 addons.go:510] duration metric: took 1.779739471s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0802 19:05:32.110124   63271 system_pods.go:86] 9 kube-system pods found
	I0802 19:05:32.110154   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:32.110161   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:32.110169   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:32.110174   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:32.110179   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:32.110183   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Running
	I0802 19:05:32.110187   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:32.110193   63271 system_pods.go:89] "metrics-server-569cc877fc-d69sk" [4d7a8428-5611-44a4-93a7-4440735668f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 19:05:32.110198   63271 system_pods.go:89] "storage-provisioner" [d3300a13-9ee5-4eeb-9e21-9ef40aad1379] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0802 19:05:32.110205   63271 system_pods.go:126] duration metric: took 1.232002006s to wait for k8s-apps to be running ...
	I0802 19:05:32.110213   63271 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:05:32.110255   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:05:32.134588   63271 system_svc.go:56] duration metric: took 24.363295ms WaitForService to wait for kubelet
	I0802 19:05:32.134625   63271 kubeadm.go:582] duration metric: took 1.880469395s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:05:32.134649   63271 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:05:32.149396   63271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:05:32.149432   63271 node_conditions.go:123] node cpu capacity is 2
	I0802 19:05:32.149449   63271 node_conditions.go:105] duration metric: took 14.794217ms to run NodePressure ...
	I0802 19:05:32.149465   63271 start.go:241] waiting for startup goroutines ...
	I0802 19:05:32.149477   63271 start.go:246] waiting for cluster config update ...
	I0802 19:05:32.149492   63271 start.go:255] writing updated cluster config ...
	I0802 19:05:32.149833   63271 ssh_runner.go:195] Run: rm -f paused
	I0802 19:05:32.199651   63271 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:05:32.201132   63271 out.go:177] * Done! kubectl is now configured to use "embed-certs-757654" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.254353923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625559254326755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce19ea3b-6d5e-4acf-9812-1546319d35b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.254945585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=beea2d08-bd33-45d6-a3ff-b8ba6157c7fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.254992957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=beea2d08-bd33-45d6-a3ff-b8ba6157c7fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.255033979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=beea2d08-bd33-45d6-a3ff-b8ba6157c7fa name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.292457630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59520004-8800-47b7-a94d-b5c426bb50b1 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.292545183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59520004-8800-47b7-a94d-b5c426bb50b1 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.294139186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e40b20d1-c37a-4ba3-89a7-de57d566b041 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.294528553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625559294504268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e40b20d1-c37a-4ba3-89a7-de57d566b041 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.295043201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=574f2f88-f4b1-422f-80a9-57dcb94cb4bd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.295091401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=574f2f88-f4b1-422f-80a9-57dcb94cb4bd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.295120186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=574f2f88-f4b1-422f-80a9-57dcb94cb4bd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.324104016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddd672f9-8b87-4ec5-b63e-e2527acd4fc2 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.324179914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddd672f9-8b87-4ec5-b63e-e2527acd4fc2 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.325347898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a2a4179-8973-4ab7-8815-681f6547a98c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.325832489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625559325798535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a2a4179-8973-4ab7-8815-681f6547a98c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.326316794Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=175216ef-7f79-41b6-bfb8-ef7b045fff1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.326363153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=175216ef-7f79-41b6-bfb8-ef7b045fff1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.326398908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=175216ef-7f79-41b6-bfb8-ef7b045fff1b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.357355403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e3fdb9d-e688-4ae0-9008-d1f94c310dec name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.357435211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e3fdb9d-e688-4ae0-9008-d1f94c310dec name=/runtime.v1.RuntimeService/Version
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.358379397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cd5efc4-3094-410a-ba04-55e1f77e614a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.358798384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625559358767260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cd5efc4-3094-410a-ba04-55e1f77e614a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.359218243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6da6304-8562-4403-8c0f-a8df9ae63edf name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.359268278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6da6304-8562-4403-8c0f-a8df9ae63edf name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:05:59 old-k8s-version-490984 crio[651]: time="2024-08-02 19:05:59.359303232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c6da6304-8562-4403-8c0f-a8df9ae63edf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051059] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.690028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.750688] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.557853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.754585] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060053] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.196245] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.132013] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.247678] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +5.903520] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064556] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958055] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[Aug 2 18:49] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 2 18:52] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[Aug 2 18:55] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.065921] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:05:59 up 17 min,  0 users,  load average: 0.02, 0.01, 0.00
	Linux old-k8s-version-490984 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000981830)
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: goroutine 149 [select]:
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000967ef0, 0x4f0ac20, 0xc00042bef0, 0x1, 0xc0001020c0)
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b602a0, 0xc0001020c0)
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009831f0, 0xc0009b3600)
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 02 19:05:56 old-k8s-version-490984 kubelet[6452]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 02 19:05:56 old-k8s-version-490984 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 19:05:56 old-k8s-version-490984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 19:05:57 old-k8s-version-490984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 02 19:05:57 old-k8s-version-490984 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 19:05:57 old-k8s-version-490984 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 19:05:57 old-k8s-version-490984 kubelet[6461]: I0802 19:05:57.569769    6461 server.go:416] Version: v1.20.0
	Aug 02 19:05:57 old-k8s-version-490984 kubelet[6461]: I0802 19:05:57.570142    6461 server.go:837] Client rotation is on, will bootstrap in background
	Aug 02 19:05:57 old-k8s-version-490984 kubelet[6461]: I0802 19:05:57.572249    6461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 19:05:57 old-k8s-version-490984 kubelet[6461]: W0802 19:05:57.573215    6461 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 02 19:05:57 old-k8s-version-490984 kubelet[6461]: I0802 19:05:57.573273    6461 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
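The kubelet excerpt above shows the unit exiting with status 255 and systemd restarting it (restart counter at 114), which is why the apiserver behind localhost:8443 never answers. As a manual follow-up outside what helpers_test captures, the unit state could be queried on the node directly; this is only a sketch, assuming the old-k8s-version-490984 VM is still running and using minikube ssh's one-off command form:

	out/minikube-linux-amd64 ssh -p old-k8s-version-490984 "sudo systemctl status kubelet --no-pager -l"
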
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (217.952004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-490984" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (541.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
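Each WARNING that follows is one iteration of that wait: the helper lists pods by label and the request fails because nothing is accepting connections on 192.168.39.168:8443. The equivalent manual check is a plain label-selector list (a sketch only; <no-preload-context> is a placeholder for the test's kube-context name, which is not shown in this excerpt):

	kubectl --context <no-preload-context> --namespace=kubernetes-dashboard get pods --selector=k8s-app=kubernetes-dashboard
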
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
[... the identical warning repeated 185 more consecutive times while 192.168.39.168:8443 kept refusing connections; duplicate lines elided ...]
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
E0802 19:03:17.306838   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
[the previous warning line repeated verbatim 111 more times]
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.168:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.168:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: [identical "dial tcp 192.168.39.168:8443: connect: connection refused" pod-list warning repeated for 21 further polls]
helpers_test.go:329: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (241.58404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-407306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-407306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (49.498523ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.168:8443 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-407306 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
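The check at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4 (the override passed via --images=MetricsScraper=... when the addon was enabled). A minimal sketch of reproducing that check by hand, assuming the apiserver on 192.168.39.168:8443 becomes reachable again (it could not be run during this test, since every request was refused):

	# print the container images used by the scraper deployment (profile name taken from this run)
	kubectl --context no-preload-407306 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'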
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (218.874267ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	| start   | -p auto-800809 --memory=3072                           | auto-800809                  | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:07:52.387737   66688 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:07:52.388015   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388029   66688 out.go:304] Setting ErrFile to fd 2...
	I0802 19:07:52.388035   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388298   66688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:07:52.388891   66688 out.go:298] Setting JSON to false
	I0802 19:07:52.389794   66688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6616,"bootTime":1722619056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:07:52.389852   66688 start.go:139] virtualization: kvm guest
	I0802 19:07:52.392168   66688 out.go:177] * [auto-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:07:52.393615   66688 notify.go:220] Checking for updates...
	I0802 19:07:52.393640   66688 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:07:52.395373   66688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:07:52.396717   66688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:07:52.398086   66688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.399452   66688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:07:52.400729   66688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:07:52.402521   66688 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402614   66688 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402703   66688 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 19:07:52.402771   66688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:07:52.439687   66688 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:07:52.440964   66688 start.go:297] selected driver: kvm2
	I0802 19:07:52.440978   66688 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:07:52.440991   66688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:07:52.441693   66688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.441767   66688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:07:52.457467   66688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:07:52.457540   66688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:07:52.457796   66688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:07:52.457864   66688 cni.go:84] Creating CNI manager for ""
	I0802 19:07:52.457882   66688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:07:52.457893   66688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:07:52.457980   66688 start.go:340] cluster config:
	{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:07:52.458099   66688 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.459694   66688 out.go:177] * Starting "auto-800809" primary control-plane node in "auto-800809" cluster
	I0802 19:07:52.460849   66688 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:07:52.460888   66688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:07:52.460898   66688 cache.go:56] Caching tarball of preloaded images
	I0802 19:07:52.461002   66688 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:07:52.461012   66688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:07:52.461103   66688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json ...
	I0802 19:07:52.461119   66688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json: {Name:mkbc30c2051290c26315ed28bd3a600c251b421b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:07:52.461236   66688 start.go:360] acquireMachinesLock for auto-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:07:52.461262   66688 start.go:364] duration metric: took 14.697µs to acquireMachinesLock for "auto-800809"
	I0802 19:07:52.461277   66688 start.go:93] Provisioning new machine with config: &{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:07:52.461334   66688 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 19:07:55.416997     679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:55.418622     679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:55.420060     679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:55.421595     679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:55.422944     679 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:50] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 19:07:55 up 18 min,  0 users,  load average: 0.08, 0.02, 0.01
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 19:07:55.015850   66892 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:54Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:54Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:54Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:54Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:54Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:54Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.055929   66892 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.090729   66892 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.124025   66892 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.160412   66892 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.200668   66892 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.237257   66892 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:55.267533   66892 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:55Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:55Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:55Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (225.834015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (541.44s)
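The "==> CRI-O <==" excerpt above shows crio.service on no-preload-407306 failing to start with result 'dependency', which is why crictl falls back to its deprecated default endpoints and the apiserver never returns. A minimal sketch of commands that could be used to dig into that dependency failure from the host, assuming the VM is still up and reachable via minikube ssh (none of these were run as part of this report):

	# inspect the crio unit, its dependency chain, and recent journal output
	minikube ssh -p no-preload-407306 -- sudo systemctl status crio --no-pager
	minikube ssh -p no-preload-407306 -- sudo systemctl list-dependencies crio
	minikube ssh -p no-preload-407306 -- sudo journalctl -u crio -b --no-pager | tail -n 50
	# query CRI-O directly instead of letting crictl probe the deprecated default endpoints
	minikube ssh -p no-preload-407306 -- sudo crictl --runtime-endpoint unix:///run/crio/crio.sock ps -a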

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-02 19:10:32.807739402 +0000 UTC m=+6253.925907020
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.777µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-504903 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
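For reference, the check that failed here can be reproduced by hand against the same profile. This is only a sketch, reusing the context name, namespace, label selector, and expected image string taken from the log above; the jsonpath query is an alternative to the describe call the test itself runs:

	# the pod the test polls for, by the same label selector and namespace
	kubectl --context default-k8s-diff-port-504903 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# the image the test expects to contain "registry.k8s.io/echoserver:1.4"
	kubectl --context default-k8s-diff-port-504903 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'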
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504903 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-504903 logs -n 25: (1.327354762s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo docker                        | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo cat                           | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo                               | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo find                          | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-800809 sudo crio                          | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p kindnet-800809                                    | kindnet-800809        | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC | 02 Aug 24 19:10 UTC |
	| start   | -p custom-flannel-800809                             | custom-flannel-800809 | jenkins | v1.33.1 | 02 Aug 24 19:10 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:10:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:10:06.763093   71068 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:10:06.763356   71068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:10:06.763366   71068 out.go:304] Setting ErrFile to fd 2...
	I0802 19:10:06.763369   71068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:10:06.763546   71068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:10:06.764097   71068 out.go:298] Setting JSON to false
	I0802 19:10:06.765161   71068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6751,"bootTime":1722619056,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:10:06.765217   71068 start.go:139] virtualization: kvm guest
	I0802 19:10:06.767321   71068 out.go:177] * [custom-flannel-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:10:06.769117   71068 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:10:06.769121   71068 notify.go:220] Checking for updates...
	I0802 19:10:06.770688   71068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:10:06.771962   71068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:10:06.773445   71068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:10:06.774627   71068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:10:06.775827   71068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:10:06.777403   71068 config.go:182] Loaded profile config "calico-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:10:06.777516   71068 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:10:06.777616   71068 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:10:06.777712   71068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:10:06.817808   71068 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:10:06.818949   71068 start.go:297] selected driver: kvm2
	I0802 19:10:06.818963   71068 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:10:06.818973   71068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:10:06.819784   71068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:10:06.819858   71068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:10:06.838378   71068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:10:06.838421   71068 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:10:06.838675   71068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:10:06.838740   71068 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0802 19:10:06.838767   71068 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0802 19:10:06.838848   71068 start.go:340] cluster config:
	{Name:custom-flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:10:06.838957   71068 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:10:06.840881   71068 out.go:177] * Starting "custom-flannel-800809" primary control-plane node in "custom-flannel-800809" cluster
	I0802 19:10:06.841999   71068 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:10:06.842043   71068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:10:06.842052   71068 cache.go:56] Caching tarball of preloaded images
	I0802 19:10:06.842143   71068 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:10:06.842160   71068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:10:06.842240   71068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/config.json ...
	I0802 19:10:06.842268   71068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/config.json: {Name:mkc9461bbb1b19f6b8f078f2b16039326a56aed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:10:06.842387   71068 start.go:360] acquireMachinesLock for custom-flannel-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:10:06.842415   71068 start.go:364] duration metric: took 14.636µs to acquireMachinesLock for "custom-flannel-800809"
	I0802 19:10:06.842429   71068 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:10:06.842491   71068 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 19:10:06.118480   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:06.618697   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:07.117930   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:07.617945   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:08.118249   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:08.618850   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:09.118907   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:09.618202   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:10.118624   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:10.618115   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:06.844080   71068 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 19:10:06.844227   71068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:06.844265   71068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:06.859624   71068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
	I0802 19:10:06.860031   71068 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:06.860629   71068 main.go:141] libmachine: Using API Version  1
	I0802 19:10:06.860658   71068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:06.861058   71068 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:06.861276   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetMachineName
	I0802 19:10:06.861466   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:06.861637   71068 start.go:159] libmachine.API.Create for "custom-flannel-800809" (driver="kvm2")
	I0802 19:10:06.861664   71068 client.go:168] LocalClient.Create starting
	I0802 19:10:06.861691   71068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 19:10:06.861761   71068 main.go:141] libmachine: Decoding PEM data...
	I0802 19:10:06.861784   71068 main.go:141] libmachine: Parsing certificate...
	I0802 19:10:06.861839   71068 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 19:10:06.861868   71068 main.go:141] libmachine: Decoding PEM data...
	I0802 19:10:06.861878   71068 main.go:141] libmachine: Parsing certificate...
	I0802 19:10:06.861900   71068 main.go:141] libmachine: Running pre-create checks...
	I0802 19:10:06.861909   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .PreCreateCheck
	I0802 19:10:06.862251   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetConfigRaw
	I0802 19:10:06.862634   71068 main.go:141] libmachine: Creating machine...
	I0802 19:10:06.862648   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .Create
	I0802 19:10:06.862803   71068 main.go:141] libmachine: (custom-flannel-800809) Creating KVM machine...
	I0802 19:10:06.864355   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found existing default KVM network
	I0802 19:10:06.866032   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:06.865871   71091 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012df90}
	I0802 19:10:06.866064   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | created network xml: 
	I0802 19:10:06.866081   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | <network>
	I0802 19:10:06.866092   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   <name>mk-custom-flannel-800809</name>
	I0802 19:10:06.866108   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   <dns enable='no'/>
	I0802 19:10:06.866116   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   
	I0802 19:10:06.866125   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 19:10:06.866144   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |     <dhcp>
	I0802 19:10:06.866152   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 19:10:06.866161   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |     </dhcp>
	I0802 19:10:06.866179   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   </ip>
	I0802 19:10:06.866204   71068 main.go:141] libmachine: (custom-flannel-800809) DBG |   
	I0802 19:10:06.866223   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | </network>
	I0802 19:10:06.866235   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | 
	I0802 19:10:06.871771   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | trying to create private KVM network mk-custom-flannel-800809 192.168.39.0/24...
	I0802 19:10:06.944958   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | private KVM network mk-custom-flannel-800809 192.168.39.0/24 created
	I0802 19:10:06.944988   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:06.944937   71091 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:10:06.945002   71068 main.go:141] libmachine: (custom-flannel-800809) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809 ...
	I0802 19:10:06.945018   71068 main.go:141] libmachine: (custom-flannel-800809) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 19:10:06.945131   71068 main.go:141] libmachine: (custom-flannel-800809) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 19:10:07.197218   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:07.197082   71091 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa...
	I0802 19:10:07.309074   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:07.308924   71091 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/custom-flannel-800809.rawdisk...
	I0802 19:10:07.309115   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Writing magic tar header
	I0802 19:10:07.309162   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Writing SSH key tar header
	I0802 19:10:07.309190   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:07.309095   71091 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809 ...
	I0802 19:10:07.309228   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809
	I0802 19:10:07.309287   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809 (perms=drwx------)
	I0802 19:10:07.309309   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 19:10:07.309321   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 19:10:07.309337   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 19:10:07.309345   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 19:10:07.309354   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 19:10:07.309364   71068 main.go:141] libmachine: (custom-flannel-800809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 19:10:07.309376   71068 main.go:141] libmachine: (custom-flannel-800809) Creating domain...
	I0802 19:10:07.309391   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:10:07.309428   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 19:10:07.309453   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 19:10:07.309468   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home/jenkins
	I0802 19:10:07.309501   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Checking permissions on dir: /home
	I0802 19:10:07.309518   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Skipping /home - not owner
	I0802 19:10:07.310823   71068 main.go:141] libmachine: (custom-flannel-800809) define libvirt domain using xml: 
	I0802 19:10:07.310841   71068 main.go:141] libmachine: (custom-flannel-800809) <domain type='kvm'>
	I0802 19:10:07.310848   71068 main.go:141] libmachine: (custom-flannel-800809)   <name>custom-flannel-800809</name>
	I0802 19:10:07.310856   71068 main.go:141] libmachine: (custom-flannel-800809)   <memory unit='MiB'>3072</memory>
	I0802 19:10:07.310864   71068 main.go:141] libmachine: (custom-flannel-800809)   <vcpu>2</vcpu>
	I0802 19:10:07.310872   71068 main.go:141] libmachine: (custom-flannel-800809)   <features>
	I0802 19:10:07.310887   71068 main.go:141] libmachine: (custom-flannel-800809)     <acpi/>
	I0802 19:10:07.310893   71068 main.go:141] libmachine: (custom-flannel-800809)     <apic/>
	I0802 19:10:07.310898   71068 main.go:141] libmachine: (custom-flannel-800809)     <pae/>
	I0802 19:10:07.310904   71068 main.go:141] libmachine: (custom-flannel-800809)     
	I0802 19:10:07.310909   71068 main.go:141] libmachine: (custom-flannel-800809)   </features>
	I0802 19:10:07.310914   71068 main.go:141] libmachine: (custom-flannel-800809)   <cpu mode='host-passthrough'>
	I0802 19:10:07.310921   71068 main.go:141] libmachine: (custom-flannel-800809)   
	I0802 19:10:07.310930   71068 main.go:141] libmachine: (custom-flannel-800809)   </cpu>
	I0802 19:10:07.310952   71068 main.go:141] libmachine: (custom-flannel-800809)   <os>
	I0802 19:10:07.310968   71068 main.go:141] libmachine: (custom-flannel-800809)     <type>hvm</type>
	I0802 19:10:07.310980   71068 main.go:141] libmachine: (custom-flannel-800809)     <boot dev='cdrom'/>
	I0802 19:10:07.310989   71068 main.go:141] libmachine: (custom-flannel-800809)     <boot dev='hd'/>
	I0802 19:10:07.311001   71068 main.go:141] libmachine: (custom-flannel-800809)     <bootmenu enable='no'/>
	I0802 19:10:07.311011   71068 main.go:141] libmachine: (custom-flannel-800809)   </os>
	I0802 19:10:07.311022   71068 main.go:141] libmachine: (custom-flannel-800809)   <devices>
	I0802 19:10:07.311038   71068 main.go:141] libmachine: (custom-flannel-800809)     <disk type='file' device='cdrom'>
	I0802 19:10:07.311050   71068 main.go:141] libmachine: (custom-flannel-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/boot2docker.iso'/>
	I0802 19:10:07.311058   71068 main.go:141] libmachine: (custom-flannel-800809)       <target dev='hdc' bus='scsi'/>
	I0802 19:10:07.311064   71068 main.go:141] libmachine: (custom-flannel-800809)       <readonly/>
	I0802 19:10:07.311070   71068 main.go:141] libmachine: (custom-flannel-800809)     </disk>
	I0802 19:10:07.311076   71068 main.go:141] libmachine: (custom-flannel-800809)     <disk type='file' device='disk'>
	I0802 19:10:07.311085   71068 main.go:141] libmachine: (custom-flannel-800809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 19:10:07.311094   71068 main.go:141] libmachine: (custom-flannel-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/custom-flannel-800809.rawdisk'/>
	I0802 19:10:07.311130   71068 main.go:141] libmachine: (custom-flannel-800809)       <target dev='hda' bus='virtio'/>
	I0802 19:10:07.311143   71068 main.go:141] libmachine: (custom-flannel-800809)     </disk>
	I0802 19:10:07.311155   71068 main.go:141] libmachine: (custom-flannel-800809)     <interface type='network'>
	I0802 19:10:07.311187   71068 main.go:141] libmachine: (custom-flannel-800809)       <source network='mk-custom-flannel-800809'/>
	I0802 19:10:07.311206   71068 main.go:141] libmachine: (custom-flannel-800809)       <model type='virtio'/>
	I0802 19:10:07.311221   71068 main.go:141] libmachine: (custom-flannel-800809)     </interface>
	I0802 19:10:07.311233   71068 main.go:141] libmachine: (custom-flannel-800809)     <interface type='network'>
	I0802 19:10:07.311246   71068 main.go:141] libmachine: (custom-flannel-800809)       <source network='default'/>
	I0802 19:10:07.311258   71068 main.go:141] libmachine: (custom-flannel-800809)       <model type='virtio'/>
	I0802 19:10:07.311271   71068 main.go:141] libmachine: (custom-flannel-800809)     </interface>
	I0802 19:10:07.311282   71068 main.go:141] libmachine: (custom-flannel-800809)     <serial type='pty'>
	I0802 19:10:07.311294   71068 main.go:141] libmachine: (custom-flannel-800809)       <target port='0'/>
	I0802 19:10:07.311308   71068 main.go:141] libmachine: (custom-flannel-800809)     </serial>
	I0802 19:10:07.311320   71068 main.go:141] libmachine: (custom-flannel-800809)     <console type='pty'>
	I0802 19:10:07.311331   71068 main.go:141] libmachine: (custom-flannel-800809)       <target type='serial' port='0'/>
	I0802 19:10:07.311344   71068 main.go:141] libmachine: (custom-flannel-800809)     </console>
	I0802 19:10:07.311356   71068 main.go:141] libmachine: (custom-flannel-800809)     <rng model='virtio'>
	I0802 19:10:07.311370   71068 main.go:141] libmachine: (custom-flannel-800809)       <backend model='random'>/dev/random</backend>
	I0802 19:10:07.311396   71068 main.go:141] libmachine: (custom-flannel-800809)     </rng>
	I0802 19:10:07.311408   71068 main.go:141] libmachine: (custom-flannel-800809)     
	I0802 19:10:07.311420   71068 main.go:141] libmachine: (custom-flannel-800809)     
	I0802 19:10:07.311432   71068 main.go:141] libmachine: (custom-flannel-800809)   </devices>
	I0802 19:10:07.311442   71068 main.go:141] libmachine: (custom-flannel-800809) </domain>
	I0802 19:10:07.311465   71068 main.go:141] libmachine: (custom-flannel-800809) 
	I0802 19:10:07.315910   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:0d:e9:4f in network default
	I0802 19:10:07.316600   71068 main.go:141] libmachine: (custom-flannel-800809) Ensuring networks are active...
	I0802 19:10:07.316631   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:07.317761   71068 main.go:141] libmachine: (custom-flannel-800809) Ensuring network default is active
	I0802 19:10:07.318086   71068 main.go:141] libmachine: (custom-flannel-800809) Ensuring network mk-custom-flannel-800809 is active
	I0802 19:10:07.318752   71068 main.go:141] libmachine: (custom-flannel-800809) Getting domain xml...
	I0802 19:10:07.319724   71068 main.go:141] libmachine: (custom-flannel-800809) Creating domain...
	I0802 19:10:08.626157   71068 main.go:141] libmachine: (custom-flannel-800809) Waiting to get IP...
	I0802 19:10:08.626941   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:08.627431   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:08.627478   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:08.627418   71091 retry.go:31] will retry after 188.418915ms: waiting for machine to come up
	I0802 19:10:08.817987   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:08.818563   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:08.818599   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:08.818503   71091 retry.go:31] will retry after 239.695134ms: waiting for machine to come up
	I0802 19:10:09.059789   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:09.060426   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:09.060455   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:09.060383   71091 retry.go:31] will retry after 468.661087ms: waiting for machine to come up
	I0802 19:10:09.531244   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:09.531806   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:09.531831   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:09.531764   71091 retry.go:31] will retry after 589.810168ms: waiting for machine to come up
	I0802 19:10:10.123464   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:10.124020   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:10.124048   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:10.123972   71091 retry.go:31] will retry after 534.179774ms: waiting for machine to come up
	I0802 19:10:10.659346   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:10.659874   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:10.659912   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:10.659835   71091 retry.go:31] will retry after 944.465744ms: waiting for machine to come up
	I0802 19:10:11.606624   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:11.607496   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:11.607559   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:11.607441   71091 retry.go:31] will retry after 892.398501ms: waiting for machine to come up
	I0802 19:10:11.118345   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:11.617951   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:12.118739   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:12.618518   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:13.117908   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:13.618119   69358 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:10:13.744704   69358 kubeadm.go:1113] duration metric: took 11.248598357s to wait for elevateKubeSystemPrivileges
	I0802 19:10:13.744753   69358 kubeadm.go:394] duration metric: took 23.967148472s to StartCluster
	I0802 19:10:13.744775   69358 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:10:13.744864   69358 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:10:13.747235   69358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:10:13.747455   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 19:10:13.747484   69358 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:10:13.747457   69358 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.154 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:10:13.747544   69358 addons.go:69] Setting storage-provisioner=true in profile "calico-800809"
	I0802 19:10:13.747575   69358 addons.go:69] Setting default-storageclass=true in profile "calico-800809"
	I0802 19:10:13.747646   69358 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-800809"
	I0802 19:10:13.747689   69358 config.go:182] Loaded profile config "calico-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:10:13.747587   69358 addons.go:234] Setting addon storage-provisioner=true in "calico-800809"
	I0802 19:10:13.747764   69358 host.go:66] Checking if "calico-800809" exists ...
	I0802 19:10:13.748078   69358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:13.748137   69358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:13.748214   69358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:13.748271   69358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:13.750065   69358 out.go:177] * Verifying Kubernetes components...
	I0802 19:10:13.751408   69358 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:10:13.765296   69358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I0802 19:10:13.765863   69358 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:13.766439   69358 main.go:141] libmachine: Using API Version  1
	I0802 19:10:13.766467   69358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:13.766967   69358 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:13.767207   69358 main.go:141] libmachine: (calico-800809) Calling .GetState
	I0802 19:10:13.768472   69358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35533
	I0802 19:10:13.769021   69358 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:13.769570   69358 main.go:141] libmachine: Using API Version  1
	I0802 19:10:13.769601   69358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:13.771399   69358 addons.go:234] Setting addon default-storageclass=true in "calico-800809"
	I0802 19:10:13.771443   69358 host.go:66] Checking if "calico-800809" exists ...
	I0802 19:10:13.771803   69358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:13.771840   69358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:13.772505   69358 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:13.773179   69358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:13.773222   69358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:13.788437   69358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33665
	I0802 19:10:13.789004   69358 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:13.789777   69358 main.go:141] libmachine: Using API Version  1
	I0802 19:10:13.789801   69358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:13.790130   69358 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:13.790787   69358 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:10:13.790825   69358 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:10:13.794078   69358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0802 19:10:13.796808   69358 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:13.797446   69358 main.go:141] libmachine: Using API Version  1
	I0802 19:10:13.797468   69358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:13.798021   69358 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:13.798235   69358 main.go:141] libmachine: (calico-800809) Calling .GetState
	I0802 19:10:13.800563   69358 main.go:141] libmachine: (calico-800809) Calling .DriverName
	I0802 19:10:13.804518   69358 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:10:13.806085   69358 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:10:13.806105   69358 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:10:13.806125   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHHostname
	I0802 19:10:13.808657   69358 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40703
	I0802 19:10:13.809178   69358 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:10:13.809708   69358 main.go:141] libmachine: Using API Version  1
	I0802 19:10:13.809733   69358 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:10:13.809801   69358 main.go:141] libmachine: (calico-800809) DBG | domain calico-800809 has defined MAC address 52:54:00:41:2b:d4 in network mk-calico-800809
	I0802 19:10:13.810120   69358 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:10:13.810255   69358 main.go:141] libmachine: (calico-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:d4", ip: ""} in network mk-calico-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:09:35 +0000 UTC Type:0 Mac:52:54:00:41:2b:d4 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:calico-800809 Clientid:01:52:54:00:41:2b:d4}
	I0802 19:10:13.810327   69358 main.go:141] libmachine: (calico-800809) DBG | domain calico-800809 has defined IP address 192.168.50.154 and MAC address 52:54:00:41:2b:d4 in network mk-calico-800809
	I0802 19:10:13.810365   69358 main.go:141] libmachine: (calico-800809) Calling .GetState
	I0802 19:10:13.810600   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHPort
	I0802 19:10:13.810758   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHKeyPath
	I0802 19:10:13.811077   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHUsername
	I0802 19:10:13.811427   69358 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/calico-800809/id_rsa Username:docker}
	I0802 19:10:13.811974   69358 main.go:141] libmachine: (calico-800809) Calling .DriverName
	I0802 19:10:13.812146   69358 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:10:13.812158   69358 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:10:13.812173   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHHostname
	I0802 19:10:13.815687   69358 main.go:141] libmachine: (calico-800809) DBG | domain calico-800809 has defined MAC address 52:54:00:41:2b:d4 in network mk-calico-800809
	I0802 19:10:13.816180   69358 main.go:141] libmachine: (calico-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:2b:d4", ip: ""} in network mk-calico-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:09:35 +0000 UTC Type:0 Mac:52:54:00:41:2b:d4 Iaid: IPaddr:192.168.50.154 Prefix:24 Hostname:calico-800809 Clientid:01:52:54:00:41:2b:d4}
	I0802 19:10:13.816200   69358 main.go:141] libmachine: (calico-800809) DBG | domain calico-800809 has defined IP address 192.168.50.154 and MAC address 52:54:00:41:2b:d4 in network mk-calico-800809
	I0802 19:10:13.816368   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHPort
	I0802 19:10:13.816550   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHKeyPath
	I0802 19:10:13.816723   69358 main.go:141] libmachine: (calico-800809) Calling .GetSSHUsername
	I0802 19:10:13.816829   69358 sshutil.go:53] new ssh client: &{IP:192.168.50.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/calico-800809/id_rsa Username:docker}
	I0802 19:10:14.101902   69358 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:10:14.101959   69358 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 19:10:14.113688   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:10:14.117277   69358 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:10:14.164775   69358 node_ready.go:35] waiting up to 15m0s for node "calico-800809" to be "Ready" ...
	I0802 19:10:14.687599   69358 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0802 19:10:15.001739   69358 main.go:141] libmachine: Making call to close driver server
	I0802 19:10:15.001769   69358 main.go:141] libmachine: (calico-800809) Calling .Close
	I0802 19:10:15.001832   69358 main.go:141] libmachine: Making call to close driver server
	I0802 19:10:15.001860   69358 main.go:141] libmachine: (calico-800809) Calling .Close
	I0802 19:10:15.002065   69358 main.go:141] libmachine: (calico-800809) DBG | Closing plugin on server side
	I0802 19:10:15.002099   69358 main.go:141] libmachine: (calico-800809) DBG | Closing plugin on server side
	I0802 19:10:15.002130   69358 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:10:15.002137   69358 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:10:15.002152   69358 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:10:15.002166   69358 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:10:15.002174   69358 main.go:141] libmachine: Making call to close driver server
	I0802 19:10:15.002182   69358 main.go:141] libmachine: (calico-800809) Calling .Close
	I0802 19:10:15.002236   69358 main.go:141] libmachine: Making call to close driver server
	I0802 19:10:15.002259   69358 main.go:141] libmachine: (calico-800809) Calling .Close
	I0802 19:10:15.002384   69358 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:10:15.002436   69358 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:10:15.002396   69358 main.go:141] libmachine: (calico-800809) DBG | Closing plugin on server side
	I0802 19:10:15.002625   69358 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:10:15.002643   69358 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:10:15.012338   69358 main.go:141] libmachine: Making call to close driver server
	I0802 19:10:15.012365   69358 main.go:141] libmachine: (calico-800809) Calling .Close
	I0802 19:10:15.012630   69358 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:10:15.012692   69358 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:10:15.012670   69358 main.go:141] libmachine: (calico-800809) DBG | Closing plugin on server side
	I0802 19:10:15.014477   69358 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 19:10:15.016116   69358 addons.go:510] duration metric: took 1.268637864s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0802 19:10:15.198422   69358 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-800809" context rescaled to 1 replicas
	I0802 19:10:12.500982   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:12.501347   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:12.501455   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:12.501340   71091 retry.go:31] will retry after 907.090141ms: waiting for machine to come up
	I0802 19:10:13.409770   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:13.410217   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:13.410247   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:13.410192   71091 retry.go:31] will retry after 1.454855897s: waiting for machine to come up
	I0802 19:10:14.866820   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:14.867299   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:14.867328   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:14.867243   71091 retry.go:31] will retry after 1.823177158s: waiting for machine to come up
	I0802 19:10:16.692397   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:16.692818   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:16.692844   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:16.692765   71091 retry.go:31] will retry after 1.958576933s: waiting for machine to come up
	I0802 19:10:16.168764   69358 node_ready.go:53] node "calico-800809" has status "Ready":"False"
	I0802 19:10:18.169800   69358 node_ready.go:53] node "calico-800809" has status "Ready":"False"
	I0802 19:10:20.667626   69358 node_ready.go:53] node "calico-800809" has status "Ready":"False"
	I0802 19:10:18.652714   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:18.653216   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:18.653249   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:18.653155   71091 retry.go:31] will retry after 2.527651367s: waiting for machine to come up
	I0802 19:10:21.181833   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:21.182282   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:21.182311   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:21.182246   71091 retry.go:31] will retry after 3.046730438s: waiting for machine to come up
	I0802 19:10:22.668025   69358 node_ready.go:49] node "calico-800809" has status "Ready":"True"
	I0802 19:10:22.668051   69358 node_ready.go:38] duration metric: took 8.503242113s for node "calico-800809" to be "Ready" ...
	I0802 19:10:22.668062   69358 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:10:22.677670   69358 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-564985c589-9dvqq" in "kube-system" namespace to be "Ready" ...
	I0802 19:10:24.683986   69358 pod_ready.go:102] pod "calico-kube-controllers-564985c589-9dvqq" in "kube-system" namespace has status "Ready":"False"
	I0802 19:10:24.231892   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:24.232277   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find current IP address of domain custom-flannel-800809 in network mk-custom-flannel-800809
	I0802 19:10:24.232299   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | I0802 19:10:24.232239   71091 retry.go:31] will retry after 3.601868652s: waiting for machine to come up
	I0802 19:10:26.684104   69358 pod_ready.go:102] pod "calico-kube-controllers-564985c589-9dvqq" in "kube-system" namespace has status "Ready":"False"
	I0802 19:10:29.480589   69358 pod_ready.go:102] pod "calico-kube-controllers-564985c589-9dvqq" in "kube-system" namespace has status "Ready":"False"
	I0802 19:10:27.835866   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:27.836424   71068 main.go:141] libmachine: (custom-flannel-800809) Found IP for machine: 192.168.39.226
	I0802 19:10:27.836443   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has current primary IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:27.836450   71068 main.go:141] libmachine: (custom-flannel-800809) Reserving static IP address...
	I0802 19:10:27.836854   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | unable to find host DHCP lease matching {name: "custom-flannel-800809", mac: "52:54:00:36:e8:20", ip: "192.168.39.226"} in network mk-custom-flannel-800809
	I0802 19:10:27.916736   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Getting to WaitForSSH function...
	I0802 19:10:27.916779   71068 main.go:141] libmachine: (custom-flannel-800809) Reserved static IP address: 192.168.39.226
	I0802 19:10:27.916796   71068 main.go:141] libmachine: (custom-flannel-800809) Waiting for SSH to be available...
	I0802 19:10:27.919485   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:27.920144   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:27.920178   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:27.920321   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Using SSH client type: external
	I0802 19:10:27.920367   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa (-rw-------)
	I0802 19:10:27.920412   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:10:27.920434   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | About to run SSH command:
	I0802 19:10:27.920448   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | exit 0
	I0802 19:10:28.051966   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | SSH cmd err, output: <nil>: 
	I0802 19:10:28.052475   71068 main.go:141] libmachine: (custom-flannel-800809) KVM machine creation complete!
	I0802 19:10:28.052722   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetConfigRaw
	I0802 19:10:28.053362   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:28.053566   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:28.053721   71068 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 19:10:28.053742   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetState
	I0802 19:10:28.055426   71068 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 19:10:28.055440   71068 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 19:10:28.055448   71068 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 19:10:28.055457   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.058401   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.058903   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.058937   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.059094   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.059301   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.059490   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.059665   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.059853   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:28.060051   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:28.060063   71068 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 19:10:28.170258   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:10:28.170285   71068 main.go:141] libmachine: Detecting the provisioner...
	I0802 19:10:28.170296   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.172944   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.173851   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.173898   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.174048   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.174244   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.174400   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.174553   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.174696   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:28.174869   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:28.174884   71068 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 19:10:28.284162   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 19:10:28.284226   71068 main.go:141] libmachine: found compatible host: buildroot
	I0802 19:10:28.284236   71068 main.go:141] libmachine: Provisioning with buildroot...
	I0802 19:10:28.284244   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetMachineName
	I0802 19:10:28.284494   71068 buildroot.go:166] provisioning hostname "custom-flannel-800809"
	I0802 19:10:28.284514   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetMachineName
	I0802 19:10:28.284713   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.288258   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.288772   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.288802   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.288981   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.289165   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.289336   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.289492   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.289680   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:28.289855   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:28.289869   71068 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-800809 && echo "custom-flannel-800809" | sudo tee /etc/hostname
	I0802 19:10:28.417521   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-800809
	
	I0802 19:10:28.417584   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.420952   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.421377   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.421407   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.421637   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.421840   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.421993   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.422161   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.422363   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:28.422578   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:28.422604   71068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-800809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-800809/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-800809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:10:28.545130   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:10:28.545169   71068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:10:28.545200   71068 buildroot.go:174] setting up certificates
	I0802 19:10:28.545211   71068 provision.go:84] configureAuth start
	I0802 19:10:28.545229   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetMachineName
	I0802 19:10:28.545514   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetIP
	I0802 19:10:28.548353   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.548775   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.548814   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.548972   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.551406   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.551804   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.551824   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.551971   71068 provision.go:143] copyHostCerts
	I0802 19:10:28.552042   71068 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:10:28.552056   71068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:10:28.552129   71068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:10:28.552211   71068 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:10:28.552218   71068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:10:28.552245   71068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:10:28.552294   71068 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:10:28.552301   71068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:10:28.552322   71068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:10:28.552368   71068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-800809 san=[127.0.0.1 192.168.39.226 custom-flannel-800809 localhost minikube]
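Note: provision.go:117 above records that a server certificate is issued for this machine with SANs covering 127.0.0.1, the node IP 192.168.39.226 and the names custom-flannel-800809, localhost and minikube. Purely as an illustrative sketch (not minikube's actual code, and using a throwaway CA instead of the ca.pem/ca-key.pem referenced in the log), issuing such a certificate with Go's standard library looks roughly like this:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // must keeps the sketch short; real code would propagate errors.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Throwaway CA (an assumption for this sketch; minikube signs with its existing CA files).
        caKey := must(rsa.GenerateKey(rand.Reader, 2048))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
        caCert := must(x509.ParseCertificate(caDER))

        // Server certificate with the SANs listed in the provision.go log line above.
        srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-800809"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"custom-flannel-800809", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.226")},
        }
        srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }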
	I0802 19:10:28.657573   71068 provision.go:177] copyRemoteCerts
	I0802 19:10:28.657622   71068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:10:28.657643   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.660386   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.660827   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.660865   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.661054   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.661242   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.661399   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.661531   71068 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa Username:docker}
	I0802 19:10:28.748752   71068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:10:28.777174   71068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0802 19:10:28.802446   71068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 19:10:28.829734   71068 provision.go:87] duration metric: took 284.505578ms to configureAuth
	I0802 19:10:28.829763   71068 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:10:28.830006   71068 config.go:182] Loaded profile config "custom-flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:10:28.830108   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:28.833277   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.833640   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:28.833666   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:28.833868   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:28.834074   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.834261   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:28.834427   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:28.834632   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:28.834787   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:28.834801   71068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:10:29.122607   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:10:29.122639   71068 main.go:141] libmachine: Checking connection to Docker...
	I0802 19:10:29.122651   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetURL
	I0802 19:10:29.124114   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | Using libvirt version 6000000
	I0802 19:10:29.127163   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.127575   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.127607   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.127797   71068 main.go:141] libmachine: Docker is up and running!
	I0802 19:10:29.127814   71068 main.go:141] libmachine: Reticulating splines...
	I0802 19:10:29.127822   71068 client.go:171] duration metric: took 22.26615099s to LocalClient.Create
	I0802 19:10:29.127847   71068 start.go:167] duration metric: took 22.266211901s to libmachine.API.Create "custom-flannel-800809"
	I0802 19:10:29.127859   71068 start.go:293] postStartSetup for "custom-flannel-800809" (driver="kvm2")
	I0802 19:10:29.127877   71068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:10:29.127900   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:29.128175   71068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:10:29.128202   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:29.130717   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.131044   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.131066   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.131234   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:29.131416   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:29.131581   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:29.131727   71068 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa Username:docker}
	I0802 19:10:29.217607   71068 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:10:29.221919   71068 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:10:29.221949   71068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:10:29.222020   71068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:10:29.222118   71068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:10:29.222236   71068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:10:29.233790   71068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:10:29.258047   71068 start.go:296] duration metric: took 130.169119ms for postStartSetup
	I0802 19:10:29.258106   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetConfigRaw
	I0802 19:10:29.382787   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetIP
	I0802 19:10:29.386446   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.386880   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.386908   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.387336   71068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/config.json ...
	I0802 19:10:29.387571   71068 start.go:128] duration metric: took 22.545070573s to createHost
	I0802 19:10:29.387608   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:29.390056   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.390473   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.390504   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.390686   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:29.390877   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:29.391064   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:29.391263   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:29.391410   71068 main.go:141] libmachine: Using SSH client type: native
	I0802 19:10:29.391615   71068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0802 19:10:29.391635   71068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:10:29.504138   71068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625829.478224483
	
	I0802 19:10:29.504168   71068 fix.go:216] guest clock: 1722625829.478224483
	I0802 19:10:29.504178   71068 fix.go:229] Guest: 2024-08-02 19:10:29.478224483 +0000 UTC Remote: 2024-08-02 19:10:29.38759303 +0000 UTC m=+22.658308665 (delta=90.631453ms)
	I0802 19:10:29.504206   71068 fix.go:200] guest clock delta is within tolerance: 90.631453ms
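Note: the fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host clock, accepting the machine because the 90.6ms skew is inside the tolerance. A minimal Go sketch of that comparison (the 2-second tolerance and the helper name are assumptions; the log does not show minikube's actual threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output (seconds.nanoseconds, with %N assumed
    // to be the full 9 digits) and returns the absolute skew from the host clock.
    func clockDelta(guestOut string, host time.Time) time.Duration {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64) // parse errors ignored in this sketch
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        delta := host.Sub(time.Unix(sec, nsec))
        if delta < 0 {
            delta = -delta
        }
        return delta
    }

    func main() {
        // Guest and Remote timestamps taken from the fix.go lines above.
        host := time.Date(2024, 8, 2, 19, 10, 29, 387593030, time.UTC)
        delta := clockDelta("1722625829.478224483", host)
        fmt.Printf("guest clock delta %v, within 2s tolerance: %v\n", delta, delta <= 2*time.Second)
    }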
	I0802 19:10:29.504213   71068 start.go:83] releasing machines lock for "custom-flannel-800809", held for 22.661791954s
	I0802 19:10:29.504237   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:29.504495   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetIP
	I0802 19:10:29.507470   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.507812   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.507837   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.508061   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:29.508578   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:29.508766   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .DriverName
	I0802 19:10:29.508865   71068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:10:29.508903   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:29.509004   71068 ssh_runner.go:195] Run: cat /version.json
	I0802 19:10:29.509031   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHHostname
	I0802 19:10:29.511725   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.512168   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.512203   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.512220   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.512391   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:29.512472   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:29.512502   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:29.512601   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:29.512693   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHPort
	I0802 19:10:29.512741   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:29.512864   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHKeyPath
	I0802 19:10:29.512922   71068 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa Username:docker}
	I0802 19:10:29.513192   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetSSHUsername
	I0802 19:10:29.513345   71068 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/custom-flannel-800809/id_rsa Username:docker}
	I0802 19:10:29.592613   71068 ssh_runner.go:195] Run: systemctl --version
	I0802 19:10:29.628070   71068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:10:29.788701   71068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:10:29.794600   71068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:10:29.794676   71068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:10:29.815973   71068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:10:29.815998   71068 start.go:495] detecting cgroup driver to use...
	I0802 19:10:29.816067   71068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:10:29.837967   71068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:10:29.852462   71068 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:10:29.852542   71068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:10:29.866482   71068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:10:29.880589   71068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:10:30.019884   71068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:10:30.171689   71068 docker.go:233] disabling docker service ...
	I0802 19:10:30.171749   71068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:10:30.189482   71068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:10:30.205899   71068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:10:30.355365   71068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:10:30.475151   71068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:10:30.492552   71068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:10:30.513996   71068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:10:30.514082   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.527223   71068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:10:30.527296   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.540101   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.552454   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.565570   71068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:10:30.576692   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.588095   71068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:10:30.608229   71068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
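Note: taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines. The section headers are shown only for orientation and are an assumption about the drop-in's layout; the key/value pairs come from the commands in the log:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]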
	I0802 19:10:30.621435   71068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:10:30.632549   71068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:10:30.632615   71068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:10:30.646326   71068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:10:30.658009   71068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:10:30.785684   71068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:10:30.926134   71068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:10:30.926218   71068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:10:30.930908   71068 start.go:563] Will wait 60s for crictl version
	I0802 19:10:30.930992   71068 ssh_runner.go:195] Run: which crictl
	I0802 19:10:30.934714   71068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:10:30.980316   71068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:10:30.980410   71068 ssh_runner.go:195] Run: crio --version
	I0802 19:10:31.013534   71068 ssh_runner.go:195] Run: crio --version
	I0802 19:10:31.043925   71068 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:10:31.045535   71068 main.go:141] libmachine: (custom-flannel-800809) Calling .GetIP
	I0802 19:10:31.048840   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:31.049259   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e8:20", ip: ""} in network mk-custom-flannel-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:10:21 +0000 UTC Type:0 Mac:52:54:00:36:e8:20 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:custom-flannel-800809 Clientid:01:52:54:00:36:e8:20}
	I0802 19:10:31.049301   71068 main.go:141] libmachine: (custom-flannel-800809) DBG | domain custom-flannel-800809 has defined IP address 192.168.39.226 and MAC address 52:54:00:36:e8:20 in network mk-custom-flannel-800809
	I0802 19:10:31.049575   71068 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 19:10:31.054505   71068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:10:31.068354   71068 kubeadm.go:883] updating cluster {Name:custom-flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.30.3 ClusterName:custom-flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.226 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:10:31.068471   71068 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:10:31.068527   71068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:10:31.099853   71068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:10:31.099912   71068 ssh_runner.go:195] Run: which lz4
	I0802 19:10:31.103996   71068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:10:31.108303   71068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:10:31.108332   71068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	
	
	==> CRI-O <==
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.529807154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625833529681109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05653a35-92ff-4639-a844-d882615a1bea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.530414239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=376f3a82-f698-46ca-9cd9-8b9285ebfc8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.530479646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=376f3a82-f698-46ca-9cd9-8b9285ebfc8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.530702763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=376f3a82-f698-46ca-9cd9-8b9285ebfc8f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.577567098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc7e827c-8e69-403f-861d-861aeefad467 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.577668082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc7e827c-8e69-403f-861d-861aeefad467 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.578599630Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=196a9821-6ce6-449f-a646-c1ab9703fc97 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.579024787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625833578995721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=196a9821-6ce6-449f-a646-c1ab9703fc97 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.579581069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=400a9ded-2397-444c-9e83-8c843c94272c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.579648251Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=400a9ded-2397-444c-9e83-8c843c94272c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.579944290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=400a9ded-2397-444c-9e83-8c843c94272c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.619180496Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb1859e9-8224-41a8-ad5f-6814d65c3c78 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.619323110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb1859e9-8224-41a8-ad5f-6814d65c3c78 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.620959464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e29190c-d140-4078-91dd-3ccfeda252cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.621504497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625833621471166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e29190c-d140-4078-91dd-3ccfeda252cb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.622186665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e377481-d4d7-4987-ba8f-a8c0303ce48d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.622311067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e377481-d4d7-4987-ba8f-a8c0303ce48d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.622568879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e377481-d4d7-4987-ba8f-a8c0303ce48d name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.659090298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf881f54-bfd5-4fb2-80e2-50ba43e2f19c name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.659173761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf881f54-bfd5-4fb2-80e2-50ba43e2f19c name=/runtime.v1.RuntimeService/Version
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.660787409Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e4c526a-5edd-45bc-833e-521581951ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.661867098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625833661825636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e4c526a-5edd-45bc-833e-521581951ac5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.662384070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eda3236d-2a00-4e0b-8be6-1b23c3c43fdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.662459864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eda3236d-2a00-4e0b-8be6-1b23c3c43fdf name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:10:33 default-k8s-diff-port-504903 crio[721]: time="2024-08-02 19:10:33.663332009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722624591099900014,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e253ad56fe42192507b13134336b8ac2a8efdb23290d5101d4f02de146e1de57,PodSandboxId:d8938fd13053c270cba5bf078ee0fd24fcb5363396ba5660dde946cbe9fe632c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722624570972680760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6464f4d-f98c-4dfd-95d1-f5db6f710d13,},Annotations:map[string]string{io.kubernetes.container.hash: b114671f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85,PodSandboxId:704466c74825e438a291e8f89401f628521a09ea1739774edabeb86a8fcbc4b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722624567896950144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-k46j2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3aedd5c3-6afd-4c1d-acec-e90822891130,},Annotations:map[string]string{io.kubernetes.container.hash: f3d017fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462,PodSandboxId:c2c06fa11f752038f3f59e5f738335781dfbefaad881077f2b667140a0397d45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722624560314888259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: a7763010-83da-4af0-a923-9bf8f4508403,},Annotations:map[string]string{io.kubernetes.container.hash: 3af3763d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7,PodSandboxId:ab12399ad0296a11694422bcc11cea822d740f3c87a03cf589589da4d791f506,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722624560277314483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dfq8b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230df431-7597-403f-a0db
-88f4c99077c8,},Annotations:map[string]string{io.kubernetes.container.hash: 54f58de3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537,PodSandboxId:bda46890c78a9e12867bfa1f19ada2ee39a406297324038d0667f9a6dc8a8727,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722624556762806262,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: e4ee491a71c484abb6b84c3384f6b3f1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194,PodSandboxId:d50c8574a1768b46ba052074c7079e80b9c734f2b1c851b65533c0ba4d9f4824,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722624556745790439,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 0075b7102f3d4859e622f7449072e1d1,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e,PodSandboxId:4886e9279f8a3d62eea177a867c7ca71e43fe189b085e100d05a0512e7fbe7b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722624556757595985,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: b4735360417a891053493ebfd7525266,},Annotations:map[string]string{io.kubernetes.container.hash: b5c119bd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2,PodSandboxId:0d282d253e03fd877cfa2af7583be727cfba826b9d2abe3caa5fea595648e3a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722624556730718256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-504903,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67d341f799b4dc1b5f52e6859a81b6
93,},Annotations:map[string]string{io.kubernetes.container.hash: f3f41259,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eda3236d-2a00-4e0b-8be6-1b23c3c43fdf name=/runtime.v1.RuntimeService/ListContainers
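The run of crio debug lines above is kubelet's periodic CRI polling: each cycle issues a Version, an ImageFsInfo, and an unfiltered ListContainers request ("No filters were applied, returning full container list"), and the container list returned matches the container status table below. As a minimal sketch, not part of the test itself, the same unfiltered ListContainers call can be issued directly against the socket named in the node's cri-socket annotation (unix:///var/run/crio/crio.sock); it assumes the k8s.io/cri-api and google.golang.org/grpc Go modules are available on the node:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O serves the CRI gRPC API on a local unix socket (no TLS).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty request (no filter) reproduces the "full container list" responses logged above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  attempt=%d  %s\n", c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

From a shell on the node, crictl ps -a performs the equivalent RPC and prints roughly the container status table that follows.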
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	98515615127ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   c2c06fa11f752       storage-provisioner
	e253ad56fe421       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   d8938fd13053c       busybox
	d8510f8b41082       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   704466c74825e       coredns-7db6d8ff4d-k46j2
	ead1da5f29baa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   c2c06fa11f752       storage-provisioner
	1d9090ed318c1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      21 minutes ago      Running             kube-proxy                1                   ab12399ad0296       kube-proxy-dfq8b
	071fbeaa4252c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      21 minutes ago      Running             kube-controller-manager   1                   bda46890c78a9       kube-controller-manager-default-k8s-diff-port-504903
	8a3b871f33afd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      21 minutes ago      Running             kube-apiserver            1                   4886e9279f8a3       kube-apiserver-default-k8s-diff-port-504903
	54fef22170bcc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      21 minutes ago      Running             kube-scheduler            1                   d50c8574a1768       kube-scheduler-default-k8s-diff-port-504903
	3c33570560804       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      21 minutes ago      Running             etcd                      1                   0d282d253e03f       etcd-default-k8s-diff-port-504903
	
	
	==> coredns [d8510f8b4108229ffcace233ca3c5b5d6c4e9bf1f7bd2a057ee5f0d7c320dc85] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49594 - 49242 "HINFO IN 3514459592400852423.5345974895971787697. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01186339s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-504903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-504903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=default-k8s-diff-port-504903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T18_41_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 18:41:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-504903
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 19:10:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 19:10:14 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 19:10:14 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 19:10:14 +0000   Fri, 02 Aug 2024 18:41:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 19:10:14 +0000   Fri, 02 Aug 2024 18:49:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    default-k8s-diff-port-504903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 91cb828529e14304a21266cb2b67ace8
	  System UUID:                91cb8285-29e1-4304-a212-66cb2b67ace8
	  Boot ID:                    97c0214f-7b2f-4890-b6ff-13cd401e038f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-k46j2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-504903                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-504903             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-504903    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-dfq8b                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-504903             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-pw5tt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
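The percentage columns in the two tables above are integer ratios of the requests (or limits) to the node's Allocatable values (cpu 2 = 2000m, memory 2164184Ki). A quick arithmetic check of the totals, as a sketch that assumes the displayed percentage is the truncated integer ratio and uses only the numbers printed above:

package main

import "fmt"

// pct mirrors the truncated integer percentage shown in the tables above.
func pct(request, allocatable int64) int64 {
	return request * 100 / allocatable
}

func main() {
	const cpuAllocatableMilli = 2000 // 2 CPUs from the Allocatable block
	const memAllocatableKi = 2164184 // memory from the Allocatable block

	fmt.Println("cpu requests:", pct(850, cpuAllocatableMilli), "%")      // 850m  -> 42%
	fmt.Println("memory requests:", pct(370*1024, memAllocatableKi), "%") // 370Mi -> 17%
	fmt.Println("memory limits:", pct(170*1024, memAllocatableKi), "%")   // 170Mi -> 8%
}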
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-504903 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-504903 event: Registered Node default-k8s-diff-port-504903 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-504903 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-504903 event: Registered Node default-k8s-diff-port-504903 in Controller
	
	
	==> dmesg <==
	[Aug 2 18:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051873] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037524] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.871149] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.807454] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.493783] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:49] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.063128] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058743] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.196163] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.120913] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.269957] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.126810] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +1.689698] systemd-fstab-generator[921]: Ignoring "noauto" option for root device
	[  +0.060987] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.527787] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.372051] systemd-fstab-generator[1537]: Ignoring "noauto" option for root device
	[  +3.341950] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.078763] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [3c3357056080401eb5e08b9c8e4bc3030c07228d3eeabc5e6b3e9160b511ffb2] <==
	{"level":"info","ts":"2024-08-02T19:08:51.058928Z","caller":"traceutil/trace.go:171","msg":"trace[1460115195] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1539; }","duration":"129.502045ms","start":"2024-08-02T19:08:50.929417Z","end":"2024-08-02T19:08:51.058919Z","steps":["trace[1460115195] 'agreement among raft nodes before linearized reading'  (duration: 129.434746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:08:51.451889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.967905ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8184044464772251801 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:719391146cc08098>","response":"size:39"}
	{"level":"info","ts":"2024-08-02T19:08:51.451995Z","caller":"traceutil/trace.go:171","msg":"trace[675612806] linearizableReadLoop","detail":"{readStateIndex:1821; appliedIndex:1820; }","duration":"391.223208ms","start":"2024-08-02T19:08:51.060757Z","end":"2024-08-02T19:08:51.45198Z","steps":["trace[675612806] 'read index received'  (duration: 203.06701ms)","trace[675612806] 'applied index is now lower than readState.Index'  (duration: 188.153955ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:08:51.452056Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.286296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:08:51.45208Z","caller":"traceutil/trace.go:171","msg":"trace[1365941717] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1539; }","duration":"391.336043ms","start":"2024-08-02T19:08:51.060735Z","end":"2024-08-02T19:08:51.452071Z","steps":["trace[1365941717] 'agreement among raft nodes before linearized reading'  (duration: 391.281304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:08:51.452137Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T19:08:51.060725Z","time spent":"391.389524ms","remote":"127.0.0.1:51434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-02T19:08:51.452408Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T19:08:51.060504Z","time spent":"391.900987ms","remote":"127.0.0.1:51460","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-02T19:08:51.843157Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.307553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:08:51.843355Z","caller":"traceutil/trace.go:171","msg":"trace[1707120121] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1540; }","duration":"221.54067ms","start":"2024-08-02T19:08:51.621791Z","end":"2024-08-02T19:08:51.843331Z","steps":["trace[1707120121] 'range keys from in-memory index tree'  (duration: 221.221858ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:08:53.240391Z","caller":"traceutil/trace.go:171","msg":"trace[1557800588] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"174.637575ms","start":"2024-08-02T19:08:53.065739Z","end":"2024-08-02T19:08:53.240377Z","steps":["trace[1557800588] 'process raft request'  (duration: 174.484587ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:08:53.332828Z","caller":"traceutil/trace.go:171","msg":"trace[1667401236] transaction","detail":"{read_only:false; response_revision:1542; number_of_response:1; }","duration":"112.062275ms","start":"2024-08-02T19:08:53.220744Z","end":"2024-08-02T19:08:53.332807Z","steps":["trace[1667401236] 'process raft request'  (duration: 50.018108ms)","trace[1667401236] 'compare'  (duration: 61.947494ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:09:18.515883Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1320}
	{"level":"info","ts":"2024-08-02T19:09:18.519432Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1320,"took":"3.242427ms","hash":3488533385,"current-db-size-bytes":2625536,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-02T19:09:18.519486Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3488533385,"revision":1320,"compact-revision":1076}
	{"level":"warn","ts":"2024-08-02T19:09:51.575855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.47439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8184044464772252091 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.183\" mod_revision:1581 > success:<request_put:<key:\"/registry/masterleases/192.168.61.183\" value_size:67 lease:8184044464772252089 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.183\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T19:09:51.576386Z","caller":"traceutil/trace.go:171","msg":"trace[665496308] transaction","detail":"{read_only:false; response_revision:1589; number_of_response:1; }","duration":"439.827409ms","start":"2024-08-02T19:09:51.136493Z","end":"2024-08-02T19:09:51.57632Z","steps":["trace[665496308] 'process raft request'  (duration: 202.158153ms)","trace[665496308] 'compare'  (duration: 236.295773ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:09:51.576487Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T19:09:51.136476Z","time spent":"439.954129ms","remote":"127.0.0.1:51460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.183\" mod_revision:1581 > success:<request_put:<key:\"/registry/masterleases/192.168.61.183\" value_size:67 lease:8184044464772252089 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.183\" > >"}
	{"level":"warn","ts":"2024-08-02T19:09:51.833009Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.652055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8184044464772252095 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1588 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-02T19:09:51.83313Z","caller":"traceutil/trace.go:171","msg":"trace[1377887776] linearizableReadLoop","detail":"{readStateIndex:1886; appliedIndex:1885; }","duration":"251.696979ms","start":"2024-08-02T19:09:51.581414Z","end":"2024-08-02T19:09:51.833111Z","steps":["trace[1377887776] 'read index received'  (duration: 122.824534ms)","trace[1377887776] 'applied index is now lower than readState.Index'  (duration: 128.871381ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:09:51.833431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.983284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" ","response":"range_response_count:1 size:481"}
	{"level":"info","ts":"2024-08-02T19:09:51.833495Z","caller":"traceutil/trace.go:171","msg":"trace[588700730] transaction","detail":"{read_only:false; response_revision:1590; number_of_response:1; }","duration":"252.99022ms","start":"2024-08-02T19:09:51.580495Z","end":"2024-08-02T19:09:51.833486Z","steps":["trace[588700730] 'process raft request'  (duration: 123.801842ms)","trace[588700730] 'compare'  (duration: 128.50319ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:09:51.833523Z","caller":"traceutil/trace.go:171","msg":"trace[1104436441] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1590; }","duration":"252.123993ms","start":"2024-08-02T19:09:51.581385Z","end":"2024-08-02T19:09:51.833509Z","steps":["trace[1104436441] 'agreement among raft nodes before linearized reading'  (duration: 251.868108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:09:51.833454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.836607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:09:51.834499Z","caller":"traceutil/trace.go:171","msg":"trace[1561204640] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1590; }","duration":"204.90093ms","start":"2024-08-02T19:09:51.629574Z","end":"2024-08-02T19:09:51.834475Z","steps":["trace[1561204640] 'agreement among raft nodes before linearized reading'  (duration: 203.850405ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:09:56.258052Z","caller":"traceutil/trace.go:171","msg":"trace[896737608] transaction","detail":"{read_only:false; response_revision:1594; number_of_response:1; }","duration":"112.93936ms","start":"2024-08-02T19:09:56.145087Z","end":"2024-08-02T19:09:56.258027Z","steps":["trace[896737608] 'process raft request'  (duration: 112.82313ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:10:34 up 21 min,  0 users,  load average: 0.70, 0.38, 0.20
	Linux default-k8s-diff-port-504903 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8a3b871f33afdf83833630439428d1277d90afbaa6a2c7823f3480c7848ea02e] <==
	I0802 19:08:51.462548       1 trace.go:236] Trace[651605853]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.183,type:*v1.Endpoints,resource:apiServerIPInfo (02-Aug-2024 19:08:50.944) (total time: 518ms):
	Trace[651605853]: ---"initial value restored" 114ms (19:08:51.059)
	Trace[651605853]: ---"Transaction prepared" 393ms (19:08:51.452)
	Trace[651605853]: [518.035861ms] [518.035861ms] END
	W0802 19:09:19.724056       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:09:19.724152       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0802 19:09:20.725357       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:09:20.725403       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:09:20.725414       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:09:20.725572       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:09:20.725687       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:09:20.727016       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0802 19:09:51.576984       1 trace.go:236] Trace[800338435]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.183,type:*v1.Endpoints,resource:apiServerIPInfo (02-Aug-2024 19:09:50.949) (total time: 627ms):
	Trace[800338435]: ---"Transaction prepared" 183ms (19:09:51.135)
	Trace[800338435]: ---"Txn call completed" 440ms (19:09:51.576)
	Trace[800338435]: [627.554659ms] [627.554659ms] END
	W0802 19:10:20.726302       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:10:20.726415       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:10:20.726429       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:10:20.727399       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:10:20.727513       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:10:20.727550       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [071fbeaa4252c36bc433759764d2b31fdf184811455485c16dce8eec63263537] <==
	I0802 19:05:27.921659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="450.86µs"
	E0802 19:05:32.685295       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:05:33.250996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:05:40.924852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="152.783µs"
	E0802 19:06:02.689960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:06:03.260704       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:06:32.695661       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:06:33.271032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:07:02.701425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:07:03.279774       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:07:32.706573       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:07:33.287117       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:08:02.714472       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:08:03.297550       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:08:32.720370       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:08:33.314757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:09:02.725820       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:09:03.321780       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:09:32.730640       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:09:33.329376       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:10:02.736129       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:10:03.338573       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:10:29.926765       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="1.174997ms"
	E0802 19:10:32.743467       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:10:33.356233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1d9090ed318c1c60d350f923d25db10bce0c8c36bbd2209d04cafd353cce67e7] <==
	I0802 18:49:20.456542       1 server_linux.go:69] "Using iptables proxy"
	I0802 18:49:20.476986       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.183"]
	I0802 18:49:20.506935       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 18:49:20.506991       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 18:49:20.507045       1 server_linux.go:165] "Using iptables Proxier"
	I0802 18:49:20.509314       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 18:49:20.509576       1 server.go:872] "Version info" version="v1.30.3"
	I0802 18:49:20.509599       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:49:20.510982       1 config.go:192] "Starting service config controller"
	I0802 18:49:20.511018       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 18:49:20.511078       1 config.go:101] "Starting endpoint slice config controller"
	I0802 18:49:20.511103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 18:49:20.511698       1 config.go:319] "Starting node config controller"
	I0802 18:49:20.511719       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 18:49:20.612150       1 shared_informer.go:320] Caches are synced for node config
	I0802 18:49:20.612236       1 shared_informer.go:320] Caches are synced for service config
	I0802 18:49:20.612256       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [54fef22170bccce8cbbe2f21c3857b13d0679e863ad490238c25659e8cd61194] <==
	I0802 18:49:17.897047       1 serving.go:380] Generated self-signed cert in-memory
	W0802 18:49:19.669727       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0802 18:49:19.669804       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0802 18:49:19.669832       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0802 18:49:19.669855       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0802 18:49:19.712907       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0802 18:49:19.715248       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 18:49:19.716882       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0802 18:49:19.717318       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0802 18:49:19.720017       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0802 18:49:19.717341       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0802 18:49:19.820487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 19:08:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:08:19 default-k8s-diff-port-504903 kubelet[928]: E0802 19:08:19.908108     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:08:32 default-k8s-diff-port-504903 kubelet[928]: E0802 19:08:32.904979     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:08:43 default-k8s-diff-port-504903 kubelet[928]: E0802 19:08:43.905866     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:08:58 default-k8s-diff-port-504903 kubelet[928]: E0802 19:08:58.906171     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:09:11 default-k8s-diff-port-504903 kubelet[928]: E0802 19:09:11.905571     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:09:14 default-k8s-diff-port-504903 kubelet[928]: E0802 19:09:14.921820     928 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:09:14 default-k8s-diff-port-504903 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:09:14 default-k8s-diff-port-504903 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:09:14 default-k8s-diff-port-504903 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:09:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:09:26 default-k8s-diff-port-504903 kubelet[928]: E0802 19:09:26.905807     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:09:41 default-k8s-diff-port-504903 kubelet[928]: E0802 19:09:41.905524     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:09:52 default-k8s-diff-port-504903 kubelet[928]: E0802 19:09:52.905279     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:10:04 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:04.905451     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:10:14 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:14.922648     928 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:10:14 default-k8s-diff-port-504903 kubelet[928]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:10:14 default-k8s-diff-port-504903 kubelet[928]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:10:14 default-k8s-diff-port-504903 kubelet[928]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:10:14 default-k8s-diff-port-504903 kubelet[928]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:10:17 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:17.922434     928 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 02 19:10:17 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:17.922534     928 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 02 19:10:17 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:17.922828     928 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsv4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-pw5tt_kube-system(35b4be07-d078-4cf8-80b9-15109421de2f): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Aug 02 19:10:17 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:17.922921     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	Aug 02 19:10:29 default-k8s-diff-port-504903 kubelet[928]: E0802 19:10:29.905796     928 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pw5tt" podUID="35b4be07-d078-4cf8-80b9-15109421de2f"
	
	
	==> storage-provisioner [98515615127ff0a1a90381d1a238540b1929d298f4caf66692b3949cef1fda31] <==
	I0802 18:49:51.191818       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 18:49:51.203852       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 18:49:51.204026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 18:50:08.603559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 18:50:08.603868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a!
	I0802 18:50:08.607894       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e1ecbda-1987-4cdb-b2df-9966436f5718", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a became leader
	I0802 18:50:08.704172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-504903_ea1fce85-6406-46cf-a5bd-3ce2babaf85a!
	
	
	==> storage-provisioner [ead1da5f29baa20b541ab5bcdbe966c3ec0c229d7da11b5030d116076811c462] <==
	I0802 18:49:20.438285       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0802 18:49:50.441720       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pw5tt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt: exit status 1 (63.898018ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pw5tt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-504903 describe pod metrics-server-569cc877fc-pw5tt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (466.23s)
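For anyone replaying the post-mortem above by hand, it boils down to two kubectl calls: list the pods whose phase is not Running, then describe each of them. A minimal Go sketch of that sequence (an illustration only, not the helpers_test.go implementation; it assumes kubectl is on PATH and reuses the context name from the log):

	// postmortem.go — sketch only; the context name is copied from the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "default-k8s-diff-port-504903"

		// Names of pods in any namespace whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}

		// Describe each one. Without -n this looks in the default namespace,
		// which is likely why the log above reports NotFound for a pod that
		// actually lives in kube-system.
		for _, pod := range strings.Fields(string(out)) {
			desc, derr := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("--- %s (err=%v) ---\n%s\n", pod, derr, desc)
		}
	}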

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757654 -n embed-certs-757654
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-02 19:14:32.773984954 +0000 UTC m=+6493.892152574
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
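The wait step above is essentially a label-selector poll against the kubernetes-dashboard namespace with a 9m deadline. A rough client-go equivalent (a sketch under assumptions, not the minikube helper itself: kubeconfig at the default location, a 2s poll interval, and a simple Running-phase check standing in for the helper's readiness logic):

	// waitdash.go — sketch of "wait up to 9m for a Running k8s-app=kubernetes-dashboard pod".
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s (assumed interval) until a matching pod is Running or 9m elapses.
		err = wait.PollImmediate(2*time.Second, 9*time.Minute, func() (bool, error) {
			pods, lerr := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if lerr != nil {
				return false, nil // treat transient API errors as "not yet" and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
		fmt.Println("wait result:", err) // nil on success, a timeout error at the 9m deadline
	}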
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
E0802 19:14:32.999616   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:33.004872   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
E0802 19:14:33.014959   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-757654 logs -n 25
E0802 19:14:33.035320   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:33.076010   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:33.156107   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:33.316587   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:33.637143   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-757654 logs -n 25: (1.200152853s)
E0802 19:14:34.277647   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-800809 sudo iptables                       | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo docker                         | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo find                           | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo crio                           | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-800809                                     | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:11:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:11:48.992549   75193 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:11:48.992698   75193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:11:48.992710   75193 out.go:304] Setting ErrFile to fd 2...
	I0802 19:11:48.992718   75193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:11:48.992987   75193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:11:48.993749   75193 out.go:298] Setting JSON to false
	I0802 19:11:48.995374   75193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6853,"bootTime":1722619056,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:11:48.995459   75193 start.go:139] virtualization: kvm guest
	I0802 19:11:48.997722   75193 out.go:177] * [bridge-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:11:48.999182   75193 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:11:48.999201   75193 notify.go:220] Checking for updates...
	I0802 19:11:49.001741   75193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:11:49.003065   75193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:11:49.004367   75193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:49.005495   75193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:11:49.006542   75193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:11:49.008196   75193 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008328   75193 config.go:182] Loaded profile config "enable-default-cni-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008486   75193 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008604   75193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:11:49.048702   75193 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:11:49.050024   75193 start.go:297] selected driver: kvm2
	I0802 19:11:49.050039   75193 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:11:49.050056   75193 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:11:49.050792   75193 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:11:49.050892   75193 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:11:49.068001   75193 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:11:49.068065   75193 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:11:49.068314   75193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:11:49.068384   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:11:49.068399   75193 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:11:49.068479   75193 start.go:340] cluster config:
	{Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:11:49.068594   75193 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:11:49.071081   75193 out.go:177] * Starting "bridge-800809" primary control-plane node in "bridge-800809" cluster
	I0802 19:11:49.072198   75193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:11:49.072237   75193 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:11:49.072249   75193 cache.go:56] Caching tarball of preloaded images
	I0802 19:11:49.072353   75193 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:11:49.072368   75193 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:11:49.072479   75193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json ...
	I0802 19:11:49.072498   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json: {Name:mka48f260b1295818e6d1cbbba5525ad1155665e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:49.072633   75193 start.go:360] acquireMachinesLock for bridge-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:11:50.415756   75193 start.go:364] duration metric: took 1.343099886s to acquireMachinesLock for "bridge-800809"
	I0802 19:11:50.415841   75193 start.go:93] Provisioning new machine with config: &{Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:11:50.415956   75193 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 19:11:48.505148   73373 main.go:141] libmachine: (flannel-800809) DBG | Getting to WaitForSSH function...
	I0802 19:11:48.740982   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.741432   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.741476   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.741649   73373 main.go:141] libmachine: (flannel-800809) DBG | Using SSH client type: external
	I0802 19:11:48.741674   73373 main.go:141] libmachine: (flannel-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa (-rw-------)
	I0802 19:11:48.741728   73373 main.go:141] libmachine: (flannel-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:11:48.741750   73373 main.go:141] libmachine: (flannel-800809) DBG | About to run SSH command:
	I0802 19:11:48.741767   73373 main.go:141] libmachine: (flannel-800809) DBG | exit 0
	I0802 19:11:48.871940   73373 main.go:141] libmachine: (flannel-800809) DBG | SSH cmd err, output: <nil>: 
	I0802 19:11:48.872213   73373 main.go:141] libmachine: (flannel-800809) KVM machine creation complete!
	I0802 19:11:48.872555   73373 main.go:141] libmachine: (flannel-800809) Calling .GetConfigRaw
	I0802 19:11:48.873126   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:48.873328   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:48.873502   73373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 19:11:48.873516   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:11:48.874786   73373 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 19:11:48.874798   73373 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 19:11:48.874804   73373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 19:11:48.874812   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:48.877792   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.878157   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.878200   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.878321   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:48.878492   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.878652   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.878773   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:48.878955   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:48.879218   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:48.879237   73373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 19:11:48.978740   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:11:48.978776   73373 main.go:141] libmachine: Detecting the provisioner...
	I0802 19:11:48.978786   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:48.981420   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.982001   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.982023   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.982464   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:48.982673   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.982853   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.983042   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:48.983246   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:48.983413   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:48.983423   73373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 19:11:49.087531   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 19:11:49.087584   73373 main.go:141] libmachine: found compatible host: buildroot
	I0802 19:11:49.087594   73373 main.go:141] libmachine: Provisioning with buildroot...
	I0802 19:11:49.087601   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.087856   73373 buildroot.go:166] provisioning hostname "flannel-800809"
	I0802 19:11:49.087882   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.088025   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.091214   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.091587   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.091607   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.091769   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.091938   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.092107   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.092304   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.092463   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.092662   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.092677   73373 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-800809 && echo "flannel-800809" | sudo tee /etc/hostname
	I0802 19:11:49.211080   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-800809
	
	I0802 19:11:49.211128   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.214099   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.214556   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.214588   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.214776   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.214965   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.215157   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.215332   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.215492   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.215722   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.215739   73373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-800809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-800809/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-800809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:11:49.328159   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:11:49.328206   73373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:11:49.328228   73373 buildroot.go:174] setting up certificates
	I0802 19:11:49.328243   73373 provision.go:84] configureAuth start
	I0802 19:11:49.328261   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.328548   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:49.332031   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.332412   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.332440   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.332718   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.335690   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.336088   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.336119   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.336221   73373 provision.go:143] copyHostCerts
	I0802 19:11:49.336302   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:11:49.336313   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:11:49.336382   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:11:49.336489   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:11:49.336501   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:11:49.336551   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:11:49.336647   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:11:49.336658   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:11:49.336702   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:11:49.336819   73373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.flannel-800809 san=[127.0.0.1 192.168.50.5 flannel-800809 localhost minikube]
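
Note on the step above: the provisioner issues a server certificate signed by the local minikube CA, carrying both IP and DNS SANs (127.0.0.1, 192.168.50.5, flannel-800809, localhost, minikube) and the jenkins.flannel-800809 organization. A minimal sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509, assuming the CA cert and key are already loaded; caCert and caKey are placeholders, not minikube's own helpers:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate with the given CA for the SANs
// shown in the log line above. The 26280h lifetime mirrors the CertExpiration
// value in the machine config.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-800809"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.5")},
		DNSNames:     []string{"flannel-800809", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
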
	I0802 19:11:49.754782   73373 provision.go:177] copyRemoteCerts
	I0802 19:11:49.754836   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:11:49.754858   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.757834   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.758205   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.758235   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.758393   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.758596   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.758819   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.759000   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:49.841096   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:11:49.864112   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0802 19:11:49.885998   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 19:11:49.909204   73373 provision.go:87] duration metric: took 580.942306ms to configureAuth
	I0802 19:11:49.909230   73373 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:11:49.909377   73373 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.909440   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.912361   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.912729   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.912750   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.912936   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.913138   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.913285   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.913419   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.913567   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.913721   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.913734   73373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:11:50.185279   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:11:50.185307   73373 main.go:141] libmachine: Checking connection to Docker...
	I0802 19:11:50.185318   73373 main.go:141] libmachine: (flannel-800809) Calling .GetURL
	I0802 19:11:50.186620   73373 main.go:141] libmachine: (flannel-800809) DBG | Using libvirt version 6000000
	I0802 19:11:50.189062   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.189413   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.189448   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.189594   73373 main.go:141] libmachine: Docker is up and running!
	I0802 19:11:50.189609   73373 main.go:141] libmachine: Reticulating splines...
	I0802 19:11:50.189617   73373 client.go:171] duration metric: took 28.346480007s to LocalClient.Create
	I0802 19:11:50.189640   73373 start.go:167] duration metric: took 28.346547758s to libmachine.API.Create "flannel-800809"
	I0802 19:11:50.189651   73373 start.go:293] postStartSetup for "flannel-800809" (driver="kvm2")
	I0802 19:11:50.189664   73373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:11:50.189696   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.189921   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:11:50.189946   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.192114   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.192542   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.192572   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.192752   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.192938   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.193097   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.193227   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.273672   73373 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:11:50.277719   73373 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:11:50.277738   73373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:11:50.277796   73373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:11:50.277884   73373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:11:50.277977   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:11:50.286547   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:11:50.309197   73373 start.go:296] duration metric: took 119.533267ms for postStartSetup
	I0802 19:11:50.309250   73373 main.go:141] libmachine: (flannel-800809) Calling .GetConfigRaw
	I0802 19:11:50.309820   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:50.312551   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.312939   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.312966   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.313253   73373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/config.json ...
	I0802 19:11:50.313480   73373 start.go:128] duration metric: took 28.490461799s to createHost
	I0802 19:11:50.313505   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.315973   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.316275   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.316297   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.316444   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.316587   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.316723   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.316880   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.317045   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:50.317236   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:50.317251   73373 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:11:50.415595   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625910.396656919
	
	I0802 19:11:50.415615   73373 fix.go:216] guest clock: 1722625910.396656919
	I0802 19:11:50.415623   73373 fix.go:229] Guest: 2024-08-02 19:11:50.396656919 +0000 UTC Remote: 2024-08-02 19:11:50.313494702 +0000 UTC m=+28.607265002 (delta=83.162217ms)
	I0802 19:11:50.415664   73373 fix.go:200] guest clock delta is within tolerance: 83.162217ms
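
Note on the guest-clock check above: the host runs date on the VM, parses the epoch timestamp out of the SSH output, and compares it against the host time recorded when the command was sent; provisioning only proceeds if the delta stays inside a tolerance. A minimal sketch of that comparison, with an assumed 2-second tolerance rather than minikube's actual value:

package sketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkGuestClock compares the guest's `date +%s.%N` output against the host
// time captured when the command was issued. The 2s tolerance is illustrative.
func checkGuestClock(guestDateOutput string, hostAtSend time.Time) error {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
	if err != nil {
		return fmt.Errorf("parsing guest clock: %w", err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(hostAtSend)
	if delta < 0 {
		delta = -delta
	}
	if delta > 2*time.Second {
		return fmt.Errorf("guest clock delta %v exceeds tolerance", delta)
	}
	return nil
}
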
	I0802 19:11:50.415674   73373 start.go:83] releasing machines lock for "flannel-800809", held for 28.592726479s
	I0802 19:11:50.415703   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.415948   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:50.418650   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.419014   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.419037   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.419202   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419647   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419805   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419884   73373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:11:50.419927   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.419986   73373 ssh_runner.go:195] Run: cat /version.json
	I0802 19:11:50.420015   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.422581   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.422928   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.422962   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.422986   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.423148   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.423341   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.423490   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.423498   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.423509   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.423626   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.423684   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.423837   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.423974   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.424137   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.533512   73373 ssh_runner.go:195] Run: systemctl --version
	I0802 19:11:50.539979   73373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:11:50.706624   73373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:11:50.712096   73373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:11:50.712167   73373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:11:50.727356   73373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:11:50.727376   73373 start.go:495] detecting cgroup driver to use...
	I0802 19:11:50.727442   73373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:11:50.747572   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:11:50.761645   73373 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:11:50.761702   73373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:11:50.775483   73373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:11:50.788429   73373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:11:50.905227   73373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:11:51.076329   73373 docker.go:233] disabling docker service ...
	I0802 19:11:51.076406   73373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:11:51.091256   73373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:11:51.104035   73373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:11:51.224792   73373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:11:51.341027   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:11:51.355045   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:11:51.372582   73373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:11:51.372641   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.382224   73373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:11:51.382288   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.392268   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.402012   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.411898   73373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:11:51.422311   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.432945   73373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.450825   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
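
Note on the sed sequence above: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and opens unprivileged port binding through default_sysctls. Reconstructed from those commands (and assuming the stock CRI-O drop-in layout, since the file itself is not dumped here), /etc/crio/crio.conf.d/02-crio.conf ends up with roughly:

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
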
	I0802 19:11:51.460500   73373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:11:51.470102   73373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:11:51.470167   73373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:11:51.483295   73373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:11:51.492497   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:11:51.604098   73373 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:11:51.756727   73373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:11:51.756799   73373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:11:51.761534   73373 start.go:563] Will wait 60s for crictl version
	I0802 19:11:51.761594   73373 ssh_runner.go:195] Run: which crictl
	I0802 19:11:51.764994   73373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:11:51.806688   73373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:11:51.806766   73373 ssh_runner.go:195] Run: crio --version
	I0802 19:11:51.846603   73373 ssh_runner.go:195] Run: crio --version
	I0802 19:11:51.877815   73373 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:11:50.418096   75193 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 19:11:50.418330   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:11:50.418396   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:11:50.436086   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0802 19:11:50.436509   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:11:50.437139   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:11:50.437166   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:11:50.437596   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:11:50.437836   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:11:50.438023   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:11:50.438221   75193 start.go:159] libmachine.API.Create for "bridge-800809" (driver="kvm2")
	I0802 19:11:50.438251   75193 client.go:168] LocalClient.Create starting
	I0802 19:11:50.438282   75193 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 19:11:50.438323   75193 main.go:141] libmachine: Decoding PEM data...
	I0802 19:11:50.438342   75193 main.go:141] libmachine: Parsing certificate...
	I0802 19:11:50.438428   75193 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 19:11:50.438460   75193 main.go:141] libmachine: Decoding PEM data...
	I0802 19:11:50.438482   75193 main.go:141] libmachine: Parsing certificate...
	I0802 19:11:50.438513   75193 main.go:141] libmachine: Running pre-create checks...
	I0802 19:11:50.438526   75193 main.go:141] libmachine: (bridge-800809) Calling .PreCreateCheck
	I0802 19:11:50.438949   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:11:50.439422   75193 main.go:141] libmachine: Creating machine...
	I0802 19:11:50.439441   75193 main.go:141] libmachine: (bridge-800809) Calling .Create
	I0802 19:11:50.439584   75193 main.go:141] libmachine: (bridge-800809) Creating KVM machine...
	I0802 19:11:50.440897   75193 main.go:141] libmachine: (bridge-800809) DBG | found existing default KVM network
	I0802 19:11:50.442638   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.442487   75282 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000270100}
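
Note on the subnet choice above: before writing the network XML that follows, the driver picks a private /24 that nothing on the host is already using; here bridge-800809 lands on 192.168.39.0/24 while flannel-800809 sits on 192.168.50.0/24. A rough sketch of that kind of scan, with an illustrative candidate list and helper name rather than minikube's real subnet sequence:

package sketch

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate /24 that does not overlap any
// address already configured on a host interface.
func freePrivateSubnet() (*net.IPNet, error) {
	candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		inUse := false
		for _, a := range addrs {
			if ipNet, ok := a.(*net.IPNet); ok && subnet.Contains(ipNet.IP) {
				inUse = true
				break
			}
		}
		if !inUse {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet among candidates")
}
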
	I0802 19:11:50.442665   75193 main.go:141] libmachine: (bridge-800809) DBG | created network xml: 
	I0802 19:11:50.442679   75193 main.go:141] libmachine: (bridge-800809) DBG | <network>
	I0802 19:11:50.442688   75193 main.go:141] libmachine: (bridge-800809) DBG |   <name>mk-bridge-800809</name>
	I0802 19:11:50.442699   75193 main.go:141] libmachine: (bridge-800809) DBG |   <dns enable='no'/>
	I0802 19:11:50.442711   75193 main.go:141] libmachine: (bridge-800809) DBG |   
	I0802 19:11:50.442722   75193 main.go:141] libmachine: (bridge-800809) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 19:11:50.442736   75193 main.go:141] libmachine: (bridge-800809) DBG |     <dhcp>
	I0802 19:11:50.442750   75193 main.go:141] libmachine: (bridge-800809) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 19:11:50.442764   75193 main.go:141] libmachine: (bridge-800809) DBG |     </dhcp>
	I0802 19:11:50.442777   75193 main.go:141] libmachine: (bridge-800809) DBG |   </ip>
	I0802 19:11:50.442852   75193 main.go:141] libmachine: (bridge-800809) DBG |   
	I0802 19:11:50.442876   75193 main.go:141] libmachine: (bridge-800809) DBG | </network>
	I0802 19:11:50.442893   75193 main.go:141] libmachine: (bridge-800809) DBG | 
	I0802 19:11:50.448692   75193 main.go:141] libmachine: (bridge-800809) DBG | trying to create private KVM network mk-bridge-800809 192.168.39.0/24...
	I0802 19:11:50.526128   75193 main.go:141] libmachine: (bridge-800809) DBG | private KVM network mk-bridge-800809 192.168.39.0/24 created
	I0802 19:11:50.526180   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.526103   75282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:50.526226   75193 main.go:141] libmachine: (bridge-800809) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 ...
	I0802 19:11:50.526296   75193 main.go:141] libmachine: (bridge-800809) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 19:11:50.526397   75193 main.go:141] libmachine: (bridge-800809) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 19:11:50.782637   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.782507   75282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa...
	I0802 19:11:50.989227   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.989067   75282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/bridge-800809.rawdisk...
	I0802 19:11:50.989265   75193 main.go:141] libmachine: (bridge-800809) DBG | Writing magic tar header
	I0802 19:11:50.989349   75193 main.go:141] libmachine: (bridge-800809) DBG | Writing SSH key tar header
	I0802 19:11:50.989388   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 (perms=drwx------)
	I0802 19:11:50.989414   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.989194   75282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 ...
	I0802 19:11:50.989426   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 19:11:50.989444   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 19:11:50.989457   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 19:11:50.989474   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 19:11:50.989493   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809
	I0802 19:11:50.989506   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 19:11:50.989520   75193 main.go:141] libmachine: (bridge-800809) Creating domain...
	I0802 19:11:50.989539   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 19:11:50.989552   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:50.989564   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 19:11:50.989579   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 19:11:50.989592   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins
	I0802 19:11:50.989607   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home
	I0802 19:11:50.989621   75193 main.go:141] libmachine: (bridge-800809) DBG | Skipping /home - not owner
	I0802 19:11:50.990738   75193 main.go:141] libmachine: (bridge-800809) define libvirt domain using xml: 
	I0802 19:11:50.990764   75193 main.go:141] libmachine: (bridge-800809) <domain type='kvm'>
	I0802 19:11:50.990795   75193 main.go:141] libmachine: (bridge-800809)   <name>bridge-800809</name>
	I0802 19:11:50.990818   75193 main.go:141] libmachine: (bridge-800809)   <memory unit='MiB'>3072</memory>
	I0802 19:11:50.990832   75193 main.go:141] libmachine: (bridge-800809)   <vcpu>2</vcpu>
	I0802 19:11:50.990842   75193 main.go:141] libmachine: (bridge-800809)   <features>
	I0802 19:11:50.990854   75193 main.go:141] libmachine: (bridge-800809)     <acpi/>
	I0802 19:11:50.990863   75193 main.go:141] libmachine: (bridge-800809)     <apic/>
	I0802 19:11:50.990874   75193 main.go:141] libmachine: (bridge-800809)     <pae/>
	I0802 19:11:50.990890   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.990900   75193 main.go:141] libmachine: (bridge-800809)   </features>
	I0802 19:11:50.990907   75193 main.go:141] libmachine: (bridge-800809)   <cpu mode='host-passthrough'>
	I0802 19:11:50.990915   75193 main.go:141] libmachine: (bridge-800809)   
	I0802 19:11:50.990921   75193 main.go:141] libmachine: (bridge-800809)   </cpu>
	I0802 19:11:50.990929   75193 main.go:141] libmachine: (bridge-800809)   <os>
	I0802 19:11:50.990951   75193 main.go:141] libmachine: (bridge-800809)     <type>hvm</type>
	I0802 19:11:50.990964   75193 main.go:141] libmachine: (bridge-800809)     <boot dev='cdrom'/>
	I0802 19:11:50.990977   75193 main.go:141] libmachine: (bridge-800809)     <boot dev='hd'/>
	I0802 19:11:50.990989   75193 main.go:141] libmachine: (bridge-800809)     <bootmenu enable='no'/>
	I0802 19:11:50.990998   75193 main.go:141] libmachine: (bridge-800809)   </os>
	I0802 19:11:50.991004   75193 main.go:141] libmachine: (bridge-800809)   <devices>
	I0802 19:11:50.991014   75193 main.go:141] libmachine: (bridge-800809)     <disk type='file' device='cdrom'>
	I0802 19:11:50.991025   75193 main.go:141] libmachine: (bridge-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/boot2docker.iso'/>
	I0802 19:11:50.991035   75193 main.go:141] libmachine: (bridge-800809)       <target dev='hdc' bus='scsi'/>
	I0802 19:11:50.991043   75193 main.go:141] libmachine: (bridge-800809)       <readonly/>
	I0802 19:11:50.991052   75193 main.go:141] libmachine: (bridge-800809)     </disk>
	I0802 19:11:50.991061   75193 main.go:141] libmachine: (bridge-800809)     <disk type='file' device='disk'>
	I0802 19:11:50.991072   75193 main.go:141] libmachine: (bridge-800809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 19:11:50.991087   75193 main.go:141] libmachine: (bridge-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/bridge-800809.rawdisk'/>
	I0802 19:11:50.991097   75193 main.go:141] libmachine: (bridge-800809)       <target dev='hda' bus='virtio'/>
	I0802 19:11:50.991123   75193 main.go:141] libmachine: (bridge-800809)     </disk>
	I0802 19:11:50.991135   75193 main.go:141] libmachine: (bridge-800809)     <interface type='network'>
	I0802 19:11:50.991148   75193 main.go:141] libmachine: (bridge-800809)       <source network='mk-bridge-800809'/>
	I0802 19:11:50.991158   75193 main.go:141] libmachine: (bridge-800809)       <model type='virtio'/>
	I0802 19:11:50.991168   75193 main.go:141] libmachine: (bridge-800809)     </interface>
	I0802 19:11:50.991178   75193 main.go:141] libmachine: (bridge-800809)     <interface type='network'>
	I0802 19:11:50.991191   75193 main.go:141] libmachine: (bridge-800809)       <source network='default'/>
	I0802 19:11:50.991201   75193 main.go:141] libmachine: (bridge-800809)       <model type='virtio'/>
	I0802 19:11:50.991209   75193 main.go:141] libmachine: (bridge-800809)     </interface>
	I0802 19:11:50.991226   75193 main.go:141] libmachine: (bridge-800809)     <serial type='pty'>
	I0802 19:11:50.991258   75193 main.go:141] libmachine: (bridge-800809)       <target port='0'/>
	I0802 19:11:50.991280   75193 main.go:141] libmachine: (bridge-800809)     </serial>
	I0802 19:11:50.991291   75193 main.go:141] libmachine: (bridge-800809)     <console type='pty'>
	I0802 19:11:50.991302   75193 main.go:141] libmachine: (bridge-800809)       <target type='serial' port='0'/>
	I0802 19:11:50.991313   75193 main.go:141] libmachine: (bridge-800809)     </console>
	I0802 19:11:50.991325   75193 main.go:141] libmachine: (bridge-800809)     <rng model='virtio'>
	I0802 19:11:50.991335   75193 main.go:141] libmachine: (bridge-800809)       <backend model='random'>/dev/random</backend>
	I0802 19:11:50.991349   75193 main.go:141] libmachine: (bridge-800809)     </rng>
	I0802 19:11:50.991363   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.991386   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.991395   75193 main.go:141] libmachine: (bridge-800809)   </devices>
	I0802 19:11:50.991400   75193 main.go:141] libmachine: (bridge-800809) </domain>
	I0802 19:11:50.991410   75193 main.go:141] libmachine: (bridge-800809) 
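
Note on the domain XML above: once assembled, it is handed to libvirtd to define and boot the guest, which is what the "Ensuring networks are active" and "Creating domain..." lines that follow report. A minimal sketch with the libvirt Go bindings, assuming the XML document is already held in a string; this is not the kvm2 driver's actual code:

package sketch

import (
	"libvirt.org/go/libvirt"
)

// defineAndStart registers the domain XML with libvirt and boots it. The
// qemu:///system URI matches KVMQemuURI in the machine config above.
func defineAndStart(xml string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(xml)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Create() boots the freshly defined domain (the virsh start equivalent).
	return dom.Create()
}
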
	I0802 19:11:50.996626   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:37:ac:11 in network default
	I0802 19:11:50.997257   75193 main.go:141] libmachine: (bridge-800809) Ensuring networks are active...
	I0802 19:11:50.997278   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:50.998158   75193 main.go:141] libmachine: (bridge-800809) Ensuring network default is active
	I0802 19:11:50.998535   75193 main.go:141] libmachine: (bridge-800809) Ensuring network mk-bridge-800809 is active
	I0802 19:11:50.999134   75193 main.go:141] libmachine: (bridge-800809) Getting domain xml...
	I0802 19:11:50.999961   75193 main.go:141] libmachine: (bridge-800809) Creating domain...
	I0802 19:11:52.379816   75193 main.go:141] libmachine: (bridge-800809) Waiting to get IP...
	I0802 19:11:52.381036   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.381666   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.381725   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.381644   75282 retry.go:31] will retry after 248.454118ms: waiting for machine to come up
	I0802 19:11:52.632358   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.632962   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.632984   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.632924   75282 retry.go:31] will retry after 331.963102ms: waiting for machine to come up
	I0802 19:11:52.966675   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.967280   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.967328   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.967231   75282 retry.go:31] will retry after 302.105474ms: waiting for machine to come up
	I0802 19:11:53.270669   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:53.271269   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:53.271317   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:53.271216   75282 retry.go:31] will retry after 426.086034ms: waiting for machine to come up
	I0802 19:11:53.698800   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:53.699493   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:53.699522   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:53.699444   75282 retry.go:31] will retry after 739.113839ms: waiting for machine to come up
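
Note on the retries above: while the flannel node moves on to runtime setup below, the bridge VM is still waiting for a DHCP lease, and each failed lookup schedules another attempt after a longer delay. A compact sketch of that poll-with-backoff loop; the intervals and the lookup callback are illustrative stand-ins, not what retry.go actually uses:

package sketch

import (
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address, backing off between
// attempts the way the "will retry after ..." lines above do.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 250 * time.Millisecond
	timeout := time.After(deadline)
	for {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		select {
		case <-timeout:
			return "", fmt.Errorf("no IP address before deadline")
		case <-time.After(delay):
		}
		if delay < 5*time.Second {
			delay += delay / 2 // grow the wait, mirroring the increasing retry intervals
		}
	}
}
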
	I0802 19:11:51.879036   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:51.882396   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:51.882931   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:51.882958   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:51.883240   73373 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0802 19:11:51.887474   73373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:11:51.899509   73373 kubeadm.go:883] updating cluster {Name:flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:11:51.899651   73373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:11:51.899712   73373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:11:51.930905   73373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:11:51.930990   73373 ssh_runner.go:195] Run: which lz4
	I0802 19:11:51.934836   73373 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:11:51.938936   73373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:11:51.938969   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:11:53.236069   73373 crio.go:462] duration metric: took 1.301263129s to copy over tarball
	I0802 19:11:53.236155   73373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:11:55.689790   73373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.453606792s)
	I0802 19:11:55.689815   73373 crio.go:469] duration metric: took 2.453709131s to extract the tarball
	I0802 19:11:55.689824   73373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 19:11:55.741095   73373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:11:55.790173   73373 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:11:55.790194   73373 cache_images.go:84] Images are preloaded, skipping loading
	I0802 19:11:55.790204   73373 kubeadm.go:934] updating node { 192.168.50.5 8443 v1.30.3 crio true true} ...
	I0802 19:11:55.790341   73373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-800809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
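The kubelet unit drop-in above is rendered from a handful of node parameters (binary directory, node name, node IP). Below is a rough Go text/template sketch of that kind of rendering; the template string and struct fields are illustrative assumptions, not minikube's actual template.

// Illustrative sketch: render a kubelet [Service] fragment like the one
// in the log from a few node parameters (values taken from this run).
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinDir, NodeName, NodeIP string
}

const unit = `[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.30.3",
		NodeName: "flannel-800809",
		NodeIP:   "192.168.50.5",
	})
	if err != nil {
		panic(err)
	}
}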
	I0802 19:11:55.790434   73373 ssh_runner.go:195] Run: crio config
	I0802 19:11:55.846997   73373 cni.go:84] Creating CNI manager for "flannel"
	I0802 19:11:55.847028   73373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:11:55.847061   73373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-800809 NodeName:flannel-800809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:11:55.847276   73373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-800809"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 19:11:55.847355   73373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:11:55.859933   73373 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:11:55.860000   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:11:55.869006   73373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0802 19:11:55.885489   73373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:11:55.902373   73373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
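The kubeadm.yaml written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A rough sketch of reading such a file back document by document, using the third-party gopkg.in/yaml.v3 package and an assumed local file name:

// Rough sketch: decode a multi-document kubeadm config like the one
// above and print each document's apiVersion and kind.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}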
	I0802 19:11:55.919889   73373 ssh_runner.go:195] Run: grep 192.168.50.5	control-plane.minikube.internal$ /etc/hosts
	I0802 19:11:55.923860   73373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:11:55.936457   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:11:56.078744   73373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:11:56.098162   73373 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809 for IP: 192.168.50.5
	I0802 19:11:56.098185   73373 certs.go:194] generating shared ca certs ...
	I0802 19:11:56.098207   73373 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.098390   73373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:11:56.098451   73373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:11:56.098464   73373 certs.go:256] generating profile certs ...
	I0802 19:11:56.098560   73373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key
	I0802 19:11:56.098585   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt with IP's: []
	I0802 19:11:56.825487   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt ...
	I0802 19:11:56.825521   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: {Name:mk8798632b721acc602eb532cc80981f8a8eac6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.825708   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key ...
	I0802 19:11:56.825722   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key: {Name:mk887e5f10903f5893b7d910b7823cb576fc4901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.825817   73373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993
	I0802 19:11:56.825837   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.5]
	I0802 19:11:57.040275   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 ...
	I0802 19:11:57.040301   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993: {Name:mk322863c77775a6ddc0c85a55db52704046ff51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.040461   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993 ...
	I0802 19:11:57.040475   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993: {Name:mkd172b05439072f3504d2c7474093f97a63f63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.040555   73373 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt
	I0802 19:11:57.040652   73373 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key
	I0802 19:11:57.040712   73373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key
	I0802 19:11:57.040728   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt with IP's: []
	I0802 19:11:57.374226   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt ...
	I0802 19:11:57.374253   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt: {Name:mk67d9b0bfee7da40f1bc144fab49d9c45f053a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.374421   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key ...
	I0802 19:11:57.374435   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key: {Name:mk6d68fb8dc1fc3d1d498ffeff8a3d201d7e64f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
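The profile certificates above are generated with explicit IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.5). The sketch below shows the general technique using only the Go standard library; it self-signs for brevity, whereas minikube signs these certificates with its own CA.

// Sketch: create a certificate carrying the IP SANs seen in the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.50.5"),
		},
	}

	// Self-signed for illustration; minikube uses its CA as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}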
	I0802 19:11:57.374623   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:11:57.374666   73373 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:11:57.374681   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:11:57.374716   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:11:57.374750   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:11:57.374782   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:11:57.374836   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:11:57.375444   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:11:57.404539   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:11:57.432838   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:11:57.457749   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:11:57.481495   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 19:11:57.514589   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:11:57.552197   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:11:57.577361   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 19:11:57.600577   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:11:57.623930   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:11:57.651787   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:11:57.679626   73373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:11:57.700087   73373 ssh_runner.go:195] Run: openssl version
	I0802 19:11:57.705923   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:11:57.716893   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.721638   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.721689   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.728102   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:11:57.744312   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:11:57.755979   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.761595   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.761647   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.767747   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 19:11:57.781052   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:11:57.792184   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.796625   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.796674   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.805743   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:11:57.819999   73373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:11:57.825075   73373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 19:11:57.825139   73373 kubeadm.go:392] StartCluster: {Name:flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:11:57.825230   73373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:11:57.825310   73373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:11:57.863560   73373 cri.go:89] found id: ""
	I0802 19:11:57.863635   73373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:11:57.875742   73373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:11:57.885379   73373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:11:57.895723   73373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:11:57.895745   73373 kubeadm.go:157] found existing configuration files:
	
	I0802 19:11:57.895808   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:11:57.906310   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:11:57.906376   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:11:57.919145   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:11:57.929876   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:11:57.929941   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:11:57.940648   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:11:57.950387   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:11:57.950445   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:11:57.960449   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:11:57.969472   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:11:57.969531   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:11:57.978790   73373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:11:58.048446   73373 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:11:58.048623   73373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:11:58.201701   73373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:11:58.201911   73373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:11:58.202081   73373 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0802 19:11:58.460575   73373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:11:54.440156   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:54.440733   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:54.440762   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:54.440676   75282 retry.go:31] will retry after 832.997741ms: waiting for machine to come up
	I0802 19:11:55.275698   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:55.276162   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:55.276204   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:55.276130   75282 retry.go:31] will retry after 800.164807ms: waiting for machine to come up
	I0802 19:11:56.077594   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:56.078207   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:56.078241   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:56.078138   75282 retry.go:31] will retry after 952.401705ms: waiting for machine to come up
	I0802 19:11:57.032437   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:57.032961   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:57.032995   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:57.032916   75282 retry.go:31] will retry after 1.176859984s: waiting for machine to come up
	I0802 19:11:58.211447   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:58.211987   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:58.212018   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:58.211947   75282 retry.go:31] will retry after 2.284917552s: waiting for machine to come up
	I0802 19:11:58.585912   73373 out.go:204]   - Generating certificates and keys ...
	I0802 19:11:58.586065   73373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:11:58.586173   73373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:11:58.726767   73373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 19:11:58.855822   73373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 19:11:59.008917   73373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 19:11:59.332965   73373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 19:11:59.434770   73373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 19:11:59.434956   73373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-800809 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I0802 19:11:59.507754   73373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 19:11:59.507906   73373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-800809 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I0802 19:11:59.703753   73373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 19:12:00.224289   73373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 19:12:00.375902   73373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 19:12:00.376036   73373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:12:00.537256   73373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:12:00.702586   73373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:12:00.894124   73373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:12:01.029850   73373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:12:01.244850   73373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:12:01.245753   73373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:12:01.248598   73373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:12:01.250534   73373 out.go:204]   - Booting up control plane ...
	I0802 19:12:01.250658   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:12:01.250758   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:12:01.253235   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:12:01.274898   73373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:12:01.275530   73373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:12:01.275601   73373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:12:01.422785   73373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:12:01.422892   73373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:12:00.498642   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:00.499198   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:00.499233   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:00.499130   75282 retry.go:31] will retry after 2.584473334s: waiting for machine to come up
	I0802 19:12:03.085072   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:03.085804   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:03.085842   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:03.085702   75282 retry.go:31] will retry after 2.321675283s: waiting for machine to come up
	I0802 19:12:02.427300   73373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004045728s
	I0802 19:12:02.427405   73373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:12:07.926160   73373 kubeadm.go:310] [api-check] The API server is healthy after 5.501769866s
	I0802 19:12:07.943545   73373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:12:07.959330   73373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:12:07.997482   73373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:12:07.997710   73373 kubeadm.go:310] [mark-control-plane] Marking the node flannel-800809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:12:08.016986   73373 kubeadm.go:310] [bootstrap-token] Using token: kkupdq.9c0g512l5z6vxhyc
	I0802 19:12:05.724559   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:05.724971   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:05.724991   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:05.724945   75282 retry.go:31] will retry after 3.413268879s: waiting for machine to come up
	I0802 19:12:08.018257   73373 out.go:204]   - Configuring RBAC rules ...
	I0802 19:12:08.018395   73373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:12:08.025386   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:12:08.035750   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:12:08.039019   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:12:08.044797   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:12:08.049373   73373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:12:08.332971   73373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:12:08.769985   73373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:12:09.332627   73373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:12:09.333525   73373 kubeadm.go:310] 
	I0802 19:12:09.333606   73373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:12:09.333617   73373 kubeadm.go:310] 
	I0802 19:12:09.333696   73373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:12:09.333717   73373 kubeadm.go:310] 
	I0802 19:12:09.333765   73373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:12:09.333840   73373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:12:09.333894   73373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:12:09.333900   73373 kubeadm.go:310] 
	I0802 19:12:09.333947   73373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:12:09.333953   73373 kubeadm.go:310] 
	I0802 19:12:09.333991   73373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:12:09.333999   73373 kubeadm.go:310] 
	I0802 19:12:09.334041   73373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:12:09.334103   73373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:12:09.334260   73373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:12:09.334281   73373 kubeadm.go:310] 
	I0802 19:12:09.334413   73373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:12:09.334527   73373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:12:09.334540   73373 kubeadm.go:310] 
	I0802 19:12:09.334641   73373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kkupdq.9c0g512l5z6vxhyc \
	I0802 19:12:09.334737   73373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:12:09.334771   73373 kubeadm.go:310] 	--control-plane 
	I0802 19:12:09.334780   73373 kubeadm.go:310] 
	I0802 19:12:09.334877   73373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:12:09.334886   73373 kubeadm.go:310] 
	I0802 19:12:09.334999   73373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kkupdq.9c0g512l5z6vxhyc \
	I0802 19:12:09.335133   73373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:12:09.335349   73373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0802 19:12:09.335379   73373 cni.go:84] Creating CNI manager for "flannel"
	I0802 19:12:09.337861   73373 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0802 19:12:09.339216   73373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0802 19:12:09.344611   73373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0802 19:12:09.344624   73373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0802 19:12:09.362345   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0802 19:12:09.722212   73373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:12:09.722308   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:09.722311   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-800809 minikube.k8s.io/updated_at=2024_08_02T19_12_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=flannel-800809 minikube.k8s.io/primary=true
	I0802 19:12:09.914588   73373 ops.go:34] apiserver oom_adj: -16
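The oom_adj probe above resolves the apiserver PID with pgrep and reads /proc/<pid>/oom_adj, which reports -16 here (biasing the kernel OOM killer away from the apiserver). A tiny sketch of the same read for an arbitrary PID passed on the command line:

// Sketch: print the oom_adj of a process, given its PID as an argument.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: oomadj <pid>")
		os.Exit(1)
	}
	// oom_adj is the legacy interface the log uses; oom_score_adj is the
	// modern equivalent.
	data, err := os.ReadFile(filepath.Join("/proc", os.Args[1], "oom_adj"))
	if err != nil {
		panic(err)
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}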
	I0802 19:12:09.914662   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:10.415313   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:10.914921   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:11.415447   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:09.140426   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:09.140955   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:09.140978   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:09.140905   75282 retry.go:31] will retry after 4.075349181s: waiting for machine to come up
	I0802 19:12:13.219679   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.220314   75193 main.go:141] libmachine: (bridge-800809) Found IP for machine: 192.168.39.217
	I0802 19:12:13.220343   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has current primary IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.220353   75193 main.go:141] libmachine: (bridge-800809) Reserving static IP address...
	I0802 19:12:13.220665   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find host DHCP lease matching {name: "bridge-800809", mac: "52:54:00:ca:09:00", ip: "192.168.39.217"} in network mk-bridge-800809
	I0802 19:12:13.297200   75193 main.go:141] libmachine: (bridge-800809) DBG | Getting to WaitForSSH function...
	I0802 19:12:13.297230   75193 main.go:141] libmachine: (bridge-800809) Reserved static IP address: 192.168.39.217
	I0802 19:12:13.297242   75193 main.go:141] libmachine: (bridge-800809) Waiting for SSH to be available...
	I0802 19:12:13.300545   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.300884   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809
	I0802 19:12:13.300913   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find defined IP address of network mk-bridge-800809 interface with MAC address 52:54:00:ca:09:00
	I0802 19:12:13.301043   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH client type: external
	I0802 19:12:13.301071   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa (-rw-------)
	I0802 19:12:13.301119   75193 main.go:141] libmachine: (bridge-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:12:13.301136   75193 main.go:141] libmachine: (bridge-800809) DBG | About to run SSH command:
	I0802 19:12:13.301150   75193 main.go:141] libmachine: (bridge-800809) DBG | exit 0
	I0802 19:12:13.304679   75193 main.go:141] libmachine: (bridge-800809) DBG | SSH cmd err, output: exit status 255: 
	I0802 19:12:13.304705   75193 main.go:141] libmachine: (bridge-800809) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0802 19:12:13.304716   75193 main.go:141] libmachine: (bridge-800809) DBG | command : exit 0
	I0802 19:12:13.304728   75193 main.go:141] libmachine: (bridge-800809) DBG | err     : exit status 255
	I0802 19:12:13.304742   75193 main.go:141] libmachine: (bridge-800809) DBG | output  : 
	I0802 19:12:11.915698   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:12.414949   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:12.915014   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:13.415620   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:13.914868   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:14.414712   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:14.915354   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:15.415602   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:15.914853   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:16.414800   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:16.305514   75193 main.go:141] libmachine: (bridge-800809) DBG | Getting to WaitForSSH function...
	I0802 19:12:16.307869   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.308383   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.308414   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.308598   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH client type: external
	I0802 19:12:16.308620   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa (-rw-------)
	I0802 19:12:16.308637   75193 main.go:141] libmachine: (bridge-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:12:16.308646   75193 main.go:141] libmachine: (bridge-800809) DBG | About to run SSH command:
	I0802 19:12:16.308655   75193 main.go:141] libmachine: (bridge-800809) DBG | exit 0
	I0802 19:12:16.435356   75193 main.go:141] libmachine: (bridge-800809) DBG | SSH cmd err, output: <nil>: 
	I0802 19:12:16.435666   75193 main.go:141] libmachine: (bridge-800809) KVM machine creation complete!
	I0802 19:12:16.435995   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:12:16.436646   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:16.436874   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:16.437042   75193 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 19:12:16.437057   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:16.438304   75193 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 19:12:16.438316   75193 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 19:12:16.438322   75193 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 19:12:16.438327   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.440520   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.440917   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.440943   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.441090   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.441261   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.441449   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.441604   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.441774   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.442011   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.442024   75193 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 19:12:16.546317   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
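The "exit 0" probe above is how libmachine decides that sshd inside the new VM is reachable with the generated key. A hedged sketch of the same probe using golang.org/x/crypto/ssh, with the address, user, and key path taken from this log (host-key checking is disabled, which is only acceptable for throwaway test VMs):

// Sketch: probe SSH availability by running "exit 0" on the new VM.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.217:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// "exit 0" succeeding means sshd is up and accepts the provisioned key.
	if err := sess.Run("exit 0"); err != nil {
		panic(err)
	}
	fmt.Println("SSH is available")
}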
	I0802 19:12:16.546340   75193 main.go:141] libmachine: Detecting the provisioner...
	I0802 19:12:16.546347   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.549170   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.549518   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.549564   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.549767   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.549957   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.550117   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.550253   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.550426   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.550596   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.550606   75193 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 19:12:16.659539   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 19:12:16.659604   75193 main.go:141] libmachine: found compatible host: buildroot
	I0802 19:12:16.659611   75193 main.go:141] libmachine: Provisioning with buildroot...
	I0802 19:12:16.659618   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.659898   75193 buildroot.go:166] provisioning hostname "bridge-800809"
	I0802 19:12:16.659930   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.660113   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.662842   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.663206   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.663238   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.663434   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.663640   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.663783   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.663943   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.664095   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.664253   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.664274   75193 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-800809 && echo "bridge-800809" | sudo tee /etc/hostname
	I0802 19:12:16.785036   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-800809
	
	I0802 19:12:16.785066   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.788091   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.788514   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.788544   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.788728   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.788906   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.789098   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.789256   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.789452   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.789636   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.789654   75193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-800809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-800809/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-800809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:12:16.903884   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:12:16.903921   75193 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:12:16.903944   75193 buildroot.go:174] setting up certificates
	I0802 19:12:16.903955   75193 provision.go:84] configureAuth start
	I0802 19:12:16.903966   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.904256   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:16.907334   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.907737   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.907772   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.907974   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.910682   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.911137   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.911174   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.911330   75193 provision.go:143] copyHostCerts
	I0802 19:12:16.911400   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:12:16.911413   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:12:16.911477   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:12:16.911604   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:12:16.911615   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:12:16.911656   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:12:16.911745   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:12:16.911754   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:12:16.911792   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:12:16.911872   75193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.bridge-800809 san=[127.0.0.1 192.168.39.217 bridge-800809 localhost minikube]
	I0802 19:12:17.133295   75193 provision.go:177] copyRemoteCerts
	I0802 19:12:17.133359   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:12:17.133389   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.136348   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.136748   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.136780   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.136932   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.137156   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.137325   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.137501   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.225231   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:12:17.250298   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 19:12:17.273378   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 19:12:17.300993   75193 provision.go:87] duration metric: took 397.025492ms to configureAuth
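The configureAuth step above (provision.go:117) generates a server certificate signed by the local minikube CA, carrying the SANs listed in the log (127.0.0.1, 192.168.39.217, bridge-800809, localhost, minikube). The following is only a minimal stand-alone sketch of that kind of CA-signed certificate using Go's standard library; the file names, 2048-bit key size, 3-year validity and PKCS#1 key format are illustrative assumptions, not minikube's actual code or values.

// cert_sketch.go: sketch of a CA-signed server certificate with the SANs
// shown in the log above. Paths and parameters are placeholders.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	// Load the CA certificate and its private key (PKCS#1 format assumed).
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		panic(err)
	}

	// New server key plus a template carrying the SANs from the log.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.bridge-800809"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"bridge-800809", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
	}

	// Sign with the CA and write the PEM-encoded cert and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{
		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600)
}

The resulting server.pem/server-key.pem pair is then what the copyRemoteCerts step below pushes to /etc/docker on the guest.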
	I0802 19:12:17.301027   75193 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:12:17.301190   75193 config.go:182] Loaded profile config "bridge-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:17.301282   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.304190   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.304596   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.304630   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.304797   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.305007   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.305184   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.305403   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.305587   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:17.307455   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:17.307485   75193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:12:17.584831   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:12:17.584860   75193 main.go:141] libmachine: Checking connection to Docker...
	I0802 19:12:17.584868   75193 main.go:141] libmachine: (bridge-800809) Calling .GetURL
	I0802 19:12:17.586284   75193 main.go:141] libmachine: (bridge-800809) DBG | Using libvirt version 6000000
	I0802 19:12:17.588701   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.589051   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.589089   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.589200   75193 main.go:141] libmachine: Docker is up and running!
	I0802 19:12:17.589215   75193 main.go:141] libmachine: Reticulating splines...
	I0802 19:12:17.589224   75193 client.go:171] duration metric: took 27.150963663s to LocalClient.Create
	I0802 19:12:17.589266   75193 start.go:167] duration metric: took 27.151030945s to libmachine.API.Create "bridge-800809"
	I0802 19:12:17.589278   75193 start.go:293] postStartSetup for "bridge-800809" (driver="kvm2")
	I0802 19:12:17.589297   75193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:12:17.589323   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.589584   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:12:17.589610   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.591564   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.591969   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.591993   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.592149   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.592358   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.592539   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.592725   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.678103   75193 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:12:17.682550   75193 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:12:17.682577   75193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:12:17.682667   75193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:12:17.682768   75193 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:12:17.682880   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:12:17.692311   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:12:17.714147   75193 start.go:296] duration metric: took 124.853654ms for postStartSetup
	I0802 19:12:17.714196   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:12:17.714810   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:17.717241   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.717593   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.717622   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.717877   75193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json ...
	I0802 19:12:17.718053   75193 start.go:128] duration metric: took 27.302084914s to createHost
	I0802 19:12:17.718079   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.720606   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.720996   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.721022   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.721184   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.721372   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.721539   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.721710   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.721882   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:17.722040   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:17.722051   75193 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:12:17.827523   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625937.802029901
	
	I0802 19:12:17.827566   75193 fix.go:216] guest clock: 1722625937.802029901
	I0802 19:12:17.827575   75193 fix.go:229] Guest: 2024-08-02 19:12:17.802029901 +0000 UTC Remote: 2024-08-02 19:12:17.718066905 +0000 UTC m=+28.764503981 (delta=83.962996ms)
	I0802 19:12:17.827630   75193 fix.go:200] guest clock delta is within tolerance: 83.962996ms
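The fix.go lines above read the guest's clock over SSH, compare it with the host-side timestamp, and accept the skew when it is inside a tolerance (here the delta is about 84ms). A rough sketch of just that comparison, using the two timestamps from the log; the 1-second tolerance is an assumption made for illustration, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(1722625937, 802029901).UTC()          // guest clock
	remote := time.Date(2024, 8, 2, 19, 12, 17, 718066905, time.UTC) // host-side reference
	tolerance := time.Second                                  // assumed tolerance

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	}
}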
	I0802 19:12:17.827640   75193 start.go:83] releasing machines lock for "bridge-800809", held for 27.411831635s
	I0802 19:12:17.827669   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.828080   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:17.830829   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.831385   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.831422   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.831590   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832056   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832267   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832363   75193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:12:17.832420   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.832715   75193 ssh_runner.go:195] Run: cat /version.json
	I0802 19:12:17.832741   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.835248   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.835900   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.836297   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836334   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836409   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.836626   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.836633   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.836654   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836829   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.836847   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.837039   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.837047   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.837159   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.837299   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.952043   75193 ssh_runner.go:195] Run: systemctl --version
	I0802 19:12:17.959767   75193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:12:18.124236   75193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:12:18.129989   75193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:12:18.130076   75193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:12:18.145749   75193 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:12:18.145772   75193 start.go:495] detecting cgroup driver to use...
	I0802 19:12:18.145853   75193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:12:18.162511   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:12:18.177223   75193 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:12:18.177292   75193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:12:18.191444   75193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:12:18.206118   75193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:12:18.327654   75193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:12:18.511869   75193 docker.go:233] disabling docker service ...
	I0802 19:12:18.511942   75193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:12:18.526449   75193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:12:18.540362   75193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:12:18.661472   75193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:12:18.794787   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:12:18.810299   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:12:18.828416   75193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:12:18.828469   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.838447   75193 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:12:18.838515   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.848019   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.857431   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.866948   75193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:12:18.877032   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.887280   75193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.904012   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.915318   75193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:12:18.927838   75193 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:12:18.927904   75193 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:12:18.945657   75193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:12:18.956854   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:19.069362   75193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:12:19.205717   75193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:12:19.205778   75193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:12:19.210506   75193 start.go:563] Will wait 60s for crictl version
	I0802 19:12:19.210555   75193 ssh_runner.go:195] Run: which crictl
	I0802 19:12:19.214101   75193 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:12:19.260705   75193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:12:19.260794   75193 ssh_runner.go:195] Run: crio --version
	I0802 19:12:19.287812   75193 ssh_runner.go:195] Run: crio --version
	I0802 19:12:19.318772   75193 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
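The two "Will wait 60s" lines above (start.go:542 and start.go:563) are bounded polls: stat the CRI-O socket, then ask crictl for a version, retrying until a deadline passes. A generic poll-until-ready loop of that shape is sketched below; the 500ms interval is an assumption, only the socket path and 60s budget come from the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the given path exists or the timeout expires.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}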
	I0802 19:12:16.915733   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:17.415507   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:17.915631   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:18.415384   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:18.914836   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:19.415786   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:19.915584   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:20.415599   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:20.914685   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:21.415483   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:21.915455   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:22.414929   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:22.657360   73373 kubeadm.go:1113] duration metric: took 12.935137345s to wait for elevateKubeSystemPrivileges
	I0802 19:12:22.657394   73373 kubeadm.go:394] duration metric: took 24.832258811s to StartCluster
	I0802 19:12:22.657415   73373 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:22.657487   73373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:12:22.659358   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:22.659614   73373 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:12:22.659734   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 19:12:22.659787   73373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:12:22.659888   73373 addons.go:69] Setting storage-provisioner=true in profile "flannel-800809"
	I0802 19:12:22.659902   73373 addons.go:69] Setting default-storageclass=true in profile "flannel-800809"
	I0802 19:12:22.659929   73373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-800809"
	I0802 19:12:22.659930   73373 addons.go:234] Setting addon storage-provisioner=true in "flannel-800809"
	I0802 19:12:22.659958   73373 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:22.659981   73373 host.go:66] Checking if "flannel-800809" exists ...
	I0802 19:12:22.660406   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.660437   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.660464   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.660500   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.661335   73373 out.go:177] * Verifying Kubernetes components...
	I0802 19:12:22.662788   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:22.679160   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I0802 19:12:22.679828   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.680226   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0802 19:12:22.680562   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.680590   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.681028   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.681319   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.681353   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.681817   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.681840   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.682155   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.682614   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.682646   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.684794   73373 addons.go:234] Setting addon default-storageclass=true in "flannel-800809"
	I0802 19:12:22.684833   73373 host.go:66] Checking if "flannel-800809" exists ...
	I0802 19:12:22.685174   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.685199   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.701476   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0802 19:12:22.702225   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.702739   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.702764   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.703145   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.703353   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.703795   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0802 19:12:22.704634   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.705254   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.705277   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.705335   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:12:22.705968   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.706578   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.706632   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.709042   73373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:12:22.710304   73373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:22.710321   73373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:12:22.710339   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:12:22.713456   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.713924   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:12:22.713939   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.714069   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:12:22.714296   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:12:22.714404   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:12:22.714482   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:12:22.727883   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0802 19:12:22.728338   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.728890   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.728913   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.729290   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.729466   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.730947   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:12:22.731190   73373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:22.731205   73373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:12:22.731220   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:12:22.734094   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.734452   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:12:22.734507   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.734630   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:12:22.734755   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:12:22.734910   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:12:22.735022   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:12:22.956405   73373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:23.038900   73373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:23.038975   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 19:12:23.158487   73373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:23.607626   73373 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
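The long kubectl pipeline at 19:12:23.038975 rewrites the coredns ConfigMap in place: the sed expressions insert a hosts{} block (mapping host.minikube.internal to the host gateway, 192.168.50.1 here) immediately before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors", then the result is fed back with kubectl replace. The sketch below reproduces only that line-based Corefile transformation on an illustrative input string, not the live ConfigMap; it is not minikube's code.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord mirrors the sed edits above: add a hosts block before the
// forward directive and a "log" line before "errors".
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		if trimmed == "errors" {
			out.WriteString("        log\n")
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	// Illustrative Corefile fragment, not the cluster's actual ConfigMap.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}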
	I0802 19:12:23.607693   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.607710   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.607727   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.607712   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.609620   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.609641   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.609651   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.609660   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.609776   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.609784   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.609793   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.609808   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.610229   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.610324   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.610364   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.610801   73373 node_ready.go:35] waiting up to 15m0s for node "flannel-800809" to be "Ready" ...
	I0802 19:12:23.611708   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.611750   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.611759   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.647249   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.647276   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.647584   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.647636   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.647592   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.649141   73373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 19:12:19.320079   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:19.322960   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:19.323336   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:19.323362   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:19.323687   75193 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 19:12:19.327704   75193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:12:19.339464   75193 kubeadm.go:883] updating cluster {Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:12:19.339558   75193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:12:19.339597   75193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:12:19.370398   75193 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:12:19.370458   75193 ssh_runner.go:195] Run: which lz4
	I0802 19:12:19.374571   75193 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:12:19.378526   75193 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:12:19.378559   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:12:20.699514   75193 crio.go:462] duration metric: took 1.324985414s to copy over tarball
	I0802 19:12:20.699596   75193 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:12:23.104591   75193 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.404951919s)
	I0802 19:12:23.104630   75193 crio.go:469] duration metric: took 2.405084044s to extract the tarball
	I0802 19:12:23.104640   75193 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 19:12:23.159765   75193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:12:23.205579   75193 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:12:23.205607   75193 cache_images.go:84] Images are preloaded, skipping loading
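The preload logic above (crio.go:510 before extraction, crio.go:514 after) decides whether the tarball is needed by listing images through crictl and checking for a representative image, here registry.k8s.io/kube-apiserver:v1.30.3. A rough stand-alone version of that check is sketched below; the "repoTags" field matches crictl's `images --output json` output as I understand it, and error handling is deliberately minimal.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models the part of `crictl images --output json` we care about.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	const want = "registry.k8s.io/kube-apiserver:v1.30.3"

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("could not list images:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("could not parse crictl output:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("images are preloaded, skipping loading")
				return
			}
		}
	}
	fmt.Println("preloaded image not found, extracting the preload tarball")
}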
	I0802 19:12:23.205619   75193 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0802 19:12:23.205755   75193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-800809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0802 19:12:23.205839   75193 ssh_runner.go:195] Run: crio config
	I0802 19:12:23.263041   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:12:23.263086   75193 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:12:23.263149   75193 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-800809 NodeName:bridge-800809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:12:23.263305   75193 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-800809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 19:12:23.263379   75193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:12:23.274695   75193 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:12:23.274783   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:12:23.285056   75193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0802 19:12:23.302871   75193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:12:23.319992   75193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0802 19:12:23.337423   75193 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0802 19:12:23.341882   75193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:12:23.357533   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:23.479926   75193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:23.497646   75193 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809 for IP: 192.168.39.217
	I0802 19:12:23.497669   75193 certs.go:194] generating shared ca certs ...
	I0802 19:12:23.497687   75193 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.497850   75193 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:12:23.497908   75193 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:12:23.497920   75193 certs.go:256] generating profile certs ...
	I0802 19:12:23.497982   75193 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key
	I0802 19:12:23.497998   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt with IP's: []
	I0802 19:12:23.780494   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt ...
	I0802 19:12:23.780523   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: {Name:mk6d79385d84cde35ba63f1e39377134c97a4668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.780701   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key ...
	I0802 19:12:23.780716   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key: {Name:mk91ff7c20a12742080c4c3b28589065298bf144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.780818   75193 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0
	I0802 19:12:23.780838   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217]
	I0802 19:12:23.861010   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 ...
	I0802 19:12:23.861045   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0: {Name:mk2bf665a2b367ab259bc638243a7580794de0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.861227   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0 ...
	I0802 19:12:23.861244   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0: {Name:mk6f19721002b2c31a6225e914ebc265bd9ee3a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.861341   75193 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt
	I0802 19:12:23.861441   75193 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key
	I0802 19:12:23.861499   75193 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key
	I0802 19:12:23.861513   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt with IP's: []
	I0802 19:12:24.015265   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt ...
	I0802 19:12:24.015298   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt: {Name:mk96c0d262fb9f2102d4e8c5405f62e005866bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:24.015461   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key ...
	I0802 19:12:24.015474   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key: {Name:mk00783a7287142eebca5616c32bf367e13f943c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:24.015635   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:12:24.015667   75193 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:12:24.015674   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:12:24.015697   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:12:24.015720   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:12:24.015738   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:12:24.015777   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:12:24.016358   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:12:24.041946   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:12:24.064928   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:12:24.086890   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:12:24.110024   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 19:12:24.132631   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:12:24.159174   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:12:24.187622   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 19:12:24.211947   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:12:24.237779   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:12:24.260948   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:12:24.284197   75193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:12:24.299222   75193 ssh_runner.go:195] Run: openssl version
	I0802 19:12:24.304693   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:12:24.315256   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.319665   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.319732   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.325474   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:12:24.335272   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:12:24.345445   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.349628   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.349690   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.355330   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:12:24.365634   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:12:24.375806   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.380025   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.380077   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.385600   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
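
The openssl x509 -hash / ln -fs pairs above publish each trusted certificate under the subject-hash filename that OpenSSL's certificate-directory lookup uses (the .0 suffix distinguishes hash collisions). A condensed sketch of that pattern, using the minikubeCA.pem path from the log:

  # Compute the subject hash that OpenSSL keys /etc/ssl/certs on (e.g. b5213941 above)
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # Link the cert under its readable name and under its <hash>.0 name
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0"
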
	I0802 19:12:24.395153   75193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:12:24.398785   75193 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 19:12:24.398851   75193 kubeadm.go:392] StartCluster: {Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:12:24.398920   75193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:12:24.398974   75193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:12:24.437132   75193 cri.go:89] found id: ""
	I0802 19:12:24.437207   75193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:12:24.446770   75193 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:12:24.455740   75193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:12:24.465253   75193 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:12:24.465270   75193 kubeadm.go:157] found existing configuration files:
	
	I0802 19:12:24.465324   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:12:24.474207   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:12:24.474261   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:12:24.483190   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:12:24.491641   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:12:24.491701   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:12:24.500197   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:12:24.508996   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:12:24.509063   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:12:24.518056   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:12:24.526422   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:12:24.526467   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:12:24.536722   75193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:12:24.589482   75193 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:12:24.589560   75193 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:12:24.706338   75193 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:12:24.706440   75193 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:12:24.706549   75193 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 19:12:24.900259   75193 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:12:23.650211   73373 addons.go:510] duration metric: took 990.432446ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0802 19:12:24.114834   73373 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-800809" context rescaled to 1 replicas
	I0802 19:12:25.615165   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:25.004054   75193 out.go:204]   - Generating certificates and keys ...
	I0802 19:12:25.004204   75193 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:12:25.004316   75193 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:12:25.175526   75193 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 19:12:25.561760   75193 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 19:12:26.054499   75193 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 19:12:26.358987   75193 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 19:12:26.705022   75193 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 19:12:26.705240   75193 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-800809 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0802 19:12:26.848800   75193 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 19:12:26.849134   75193 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-800809 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0802 19:12:27.154661   75193 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 19:12:27.210136   75193 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 19:12:27.334288   75193 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 19:12:27.334518   75193 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:12:27.418951   75193 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:12:27.526840   75193 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:12:27.784216   75193 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:12:27.904018   75193 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:12:28.068523   75193 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:12:28.069038   75193 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:12:28.072981   75193 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:12:28.074855   75193 out.go:204]   - Booting up control plane ...
	I0802 19:12:28.074950   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:12:28.075038   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:12:28.075132   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:12:28.093122   75193 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:12:28.093204   75193 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:12:28.093237   75193 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:12:28.228105   75193 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:12:28.228228   75193 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:12:28.729232   75193 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.610248ms
	I0802 19:12:28.729350   75193 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:12:28.115252   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:30.614836   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:33.727684   75193 kubeadm.go:310] [api-check] The API server is healthy after 5.001501231s
	I0802 19:12:33.745104   75193 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:12:33.763143   75193 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:12:33.785887   75193 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:12:33.786077   75193 kubeadm.go:310] [mark-control-plane] Marking the node bridge-800809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:12:33.798343   75193 kubeadm.go:310] [bootstrap-token] Using token: bqf6gf.15yboeq8gzijnqor
	I0802 19:12:33.799779   75193 out.go:204]   - Configuring RBAC rules ...
	I0802 19:12:33.799941   75193 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:12:33.805173   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:12:33.812998   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:12:33.980353   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:12:33.988325   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:12:33.996647   75193 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:12:34.135857   75193 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:12:34.562638   75193 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:12:35.135171   75193 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:12:35.136152   75193 kubeadm.go:310] 
	I0802 19:12:35.136263   75193 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:12:35.136285   75193 kubeadm.go:310] 
	I0802 19:12:35.136399   75193 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:12:35.136408   75193 kubeadm.go:310] 
	I0802 19:12:35.136462   75193 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:12:35.136520   75193 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:12:35.136563   75193 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:12:35.136570   75193 kubeadm.go:310] 
	I0802 19:12:35.136615   75193 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:12:35.136622   75193 kubeadm.go:310] 
	I0802 19:12:35.136672   75193 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:12:35.136678   75193 kubeadm.go:310] 
	I0802 19:12:35.136726   75193 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:12:35.136847   75193 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:12:35.136958   75193 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:12:35.136974   75193 kubeadm.go:310] 
	I0802 19:12:35.137078   75193 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:12:35.137174   75193 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:12:35.137184   75193 kubeadm.go:310] 
	I0802 19:12:35.137299   75193 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqf6gf.15yboeq8gzijnqor \
	I0802 19:12:35.137444   75193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:12:35.137472   75193 kubeadm.go:310] 	--control-plane 
	I0802 19:12:35.137481   75193 kubeadm.go:310] 
	I0802 19:12:35.137588   75193 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:12:35.137597   75193 kubeadm.go:310] 
	I0802 19:12:35.137698   75193 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqf6gf.15yboeq8gzijnqor \
	I0802 19:12:35.137853   75193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:12:35.138018   75193 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
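
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. If it is ever lost it can be recomputed on the control-plane VM with the standard openssl pipeline from the kubeadm documentation, here pointed at the certificateDir logged earlier (/var/lib/minikube/certs) and assuming the default RSA CA key:

  # Re-derive the discovery hash from the cluster CA certificate
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
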
	I0802 19:12:35.138032   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:12:35.139873   75193 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:12:32.614484   73373 node_ready.go:49] node "flannel-800809" has status "Ready":"True"
	I0802 19:12:32.614509   73373 node_ready.go:38] duration metric: took 9.003663366s for node "flannel-800809" to be "Ready" ...
	I0802 19:12:32.614517   73373 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:32.621438   73373 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:34.627722   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:36.628390   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:35.141156   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:12:35.151585   75193 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 19:12:35.171783   75193 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:12:35.171864   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:35.171871   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-800809 minikube.k8s.io/updated_at=2024_08_02T19_12_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=bridge-800809 minikube.k8s.io/primary=true
	I0802 19:12:35.208419   75193 ops.go:34] apiserver oom_adj: -16
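
The 1-k8s.conflist scp'd to /etc/cni/net.d above is a standard CNI plugin chain for the bridge network plugin selected by this profile. The exact bytes minikube writes are not shown in the log, so the following is only an illustrative conflist of that shape; the bridge name and pod subnet are assumptions for the example:

  # Illustrative /etc/cni/net.d/1-k8s.conflist for a bridge CNI chain
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
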
	I0802 19:12:35.280187   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:35.781129   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:36.280605   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:36.780608   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:37.281052   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:37.780366   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:38.281227   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:38.780432   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:39.127537   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:41.628305   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:39.280972   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:39.781229   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:40.280979   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:40.781125   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:41.281094   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:41.780907   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:42.280287   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:42.781055   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:43.280412   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:43.780602   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:44.127986   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:46.128398   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:44.281222   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:44.780914   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:45.280901   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:45.781019   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:46.281000   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:46.780354   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:47.280417   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:47.780433   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:48.281087   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:48.387665   75193 kubeadm.go:1113] duration metric: took 13.21586494s to wait for elevateKubeSystemPrivileges
	I0802 19:12:48.387702   75193 kubeadm.go:394] duration metric: took 23.988862741s to StartCluster
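
The burst of "kubectl get sa default" calls above is the post-init wait (elevateKubeSystemPrivileges) for the default ServiceAccount to be provisioned before RBAC setup is considered settled. Expressed directly as a poll loop, with the binary and kubeconfig paths taken from the log:

  # Poll until the "default" ServiceAccount exists in the default namespace
  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done
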
	I0802 19:12:48.387722   75193 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:48.387791   75193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:12:48.389325   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:48.389558   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 19:12:48.389586   75193 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:12:48.389650   75193 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:12:48.389724   75193 addons.go:69] Setting storage-provisioner=true in profile "bridge-800809"
	I0802 19:12:48.389768   75193 addons.go:234] Setting addon storage-provisioner=true in "bridge-800809"
	I0802 19:12:48.389769   75193 addons.go:69] Setting default-storageclass=true in profile "bridge-800809"
	I0802 19:12:48.389807   75193 host.go:66] Checking if "bridge-800809" exists ...
	I0802 19:12:48.389825   75193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-800809"
	I0802 19:12:48.389807   75193 config.go:182] Loaded profile config "bridge-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:48.390324   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.390356   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.390362   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.390374   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.391209   75193 out.go:177] * Verifying Kubernetes components...
	I0802 19:12:48.392578   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:48.405943   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0802 19:12:48.406396   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.406955   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.406982   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.407364   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.407854   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.407880   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.410533   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I0802 19:12:48.411091   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.411569   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.411594   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.411906   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.412083   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.415357   75193 addons.go:234] Setting addon default-storageclass=true in "bridge-800809"
	I0802 19:12:48.415391   75193 host.go:66] Checking if "bridge-800809" exists ...
	I0802 19:12:48.417917   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.417950   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.426395   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0802 19:12:48.426906   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.427498   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.427523   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.427904   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.428486   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.430436   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:48.432680   75193 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:12:48.434102   75193 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:48.434122   75193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:12:48.434141   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:48.434413   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0802 19:12:48.435250   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.436014   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.436047   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.436449   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.437207   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.437242   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.437605   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.438168   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:48.438194   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.438433   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:48.438636   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:48.438838   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:48.439011   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:48.453919   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0802 19:12:48.454371   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.454914   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.454929   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.455326   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.455550   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.457279   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:48.457525   75193 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:48.457540   75193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:12:48.457554   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:48.460482   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.460785   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:48.460983   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.461054   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:48.461263   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:48.461419   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:48.461612   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:48.623544   75193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:48.623611   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 19:12:48.760972   75193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:48.764480   75193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:49.211975   75193 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
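
The sed | kubectl replace pipeline logged above injects a hosts stanza mapping host.minikube.internal to 192.168.39.1 into the CoreDNS Corefile. The result can be inspected from the test host, for example:

  # Show the patched Corefile; it should now contain:
  #   hosts {
  #      192.168.39.1 host.minikube.internal
  #      fallthrough
  #   }
  kubectl --context bridge-800809 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
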
	I0802 19:12:49.212074   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.212094   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.212377   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.212400   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.212412   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.212427   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.212436   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.212712   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.212747   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.213652   75193 node_ready.go:35] waiting up to 15m0s for node "bridge-800809" to be "Ready" ...
	I0802 19:12:49.234253   75193 node_ready.go:49] node "bridge-800809" has status "Ready":"True"
	I0802 19:12:49.234273   75193 node_ready.go:38] duration metric: took 20.595503ms for node "bridge-800809" to be "Ready" ...
	I0802 19:12:49.234286   75193 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:49.247634   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.247664   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.247904   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.247964   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.247991   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.251549   75193 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.727647   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.727674   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.728001   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.728021   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.728032   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.728040   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.728272   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.728404   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.728305   75193 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-800809" context rescaled to 1 replicas
	I0802 19:12:49.728335   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.729853   75193 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0802 19:12:48.128861   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:49.131931   73373 pod_ready.go:92] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.131960   73373 pod_ready.go:81] duration metric: took 16.510491582s for pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.131972   73373 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.138318   73373 pod_ready.go:92] pod "etcd-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.138336   73373 pod_ready.go:81] duration metric: took 6.35742ms for pod "etcd-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.138345   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.143670   73373 pod_ready.go:92] pod "kube-apiserver-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.143699   73373 pod_ready.go:81] duration metric: took 5.34653ms for pod "kube-apiserver-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.143715   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.147641   73373 pod_ready.go:92] pod "kube-controller-manager-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.147661   73373 pod_ready.go:81] duration metric: took 3.938378ms for pod "kube-controller-manager-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.147673   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tnw7q" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.151857   73373 pod_ready.go:92] pod "kube-proxy-tnw7q" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.151881   73373 pod_ready.go:81] duration metric: took 4.200828ms for pod "kube-proxy-tnw7q" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.151892   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.526195   73373 pod_ready.go:92] pod "kube-scheduler-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.526217   73373 pod_ready.go:81] duration metric: took 374.318187ms for pod "kube-scheduler-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.526226   73373 pod_ready.go:38] duration metric: took 16.911698171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:49.526240   73373 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:12:49.526284   73373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:12:49.541366   73373 api_server.go:72] duration metric: took 26.881711409s to wait for apiserver process to appear ...
	I0802 19:12:49.541391   73373 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:12:49.541414   73373 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0802 19:12:49.546469   73373 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
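
The healthz probe logged above can be reproduced by hand; on a default kubeadm cluster /healthz is readable anonymously, so a plain curl against the endpoint from the log is enough:

  # Manual probe of the same endpoint (expected body: "ok"); -k skips TLS verification,
  # or pass --cacert /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt to verify instead
  curl -k https://192.168.50.5:8443/healthz
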
	I0802 19:12:49.547600   73373 api_server.go:141] control plane version: v1.30.3
	I0802 19:12:49.547627   73373 api_server.go:131] duration metric: took 6.228546ms to wait for apiserver health ...
	I0802 19:12:49.547637   73373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:12:49.728652   73373 system_pods.go:59] 7 kube-system pods found
	I0802 19:12:49.728685   73373 system_pods.go:61] "coredns-7db6d8ff4d-n59rs" [450a63b5-55c6-40b7-985b-f444ed2d9fba] Running
	I0802 19:12:49.728692   73373 system_pods.go:61] "etcd-flannel-800809" [929ed6c8-4ba2-4614-8bd6-a1f5bb842702] Running
	I0802 19:12:49.728697   73373 system_pods.go:61] "kube-apiserver-flannel-800809" [1424e1f7-4929-40dd-ac34-73e3b1ea59c2] Running
	I0802 19:12:49.728704   73373 system_pods.go:61] "kube-controller-manager-flannel-800809" [02d0cf02-543d-4def-89d2-8fca3a15a0cc] Running
	I0802 19:12:49.728712   73373 system_pods.go:61] "kube-proxy-tnw7q" [a653fa05-c53f-459f-b85e-93f3a80e3be5] Running
	I0802 19:12:49.728716   73373 system_pods.go:61] "kube-scheduler-flannel-800809" [94d72541-3fef-4cf6-b7ef-6e69eb8d763f] Running
	I0802 19:12:49.728723   73373 system_pods.go:61] "storage-provisioner" [9078e148-1a3b-4849-877f-d4f664235a43] Running
	I0802 19:12:49.728731   73373 system_pods.go:74] duration metric: took 181.087258ms to wait for pod list to return data ...
	I0802 19:12:49.728743   73373 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:12:49.924765   73373 default_sa.go:45] found service account: "default"
	I0802 19:12:49.924790   73373 default_sa.go:55] duration metric: took 196.038261ms for default service account to be created ...
	I0802 19:12:49.924799   73373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:12:50.128518   73373 system_pods.go:86] 7 kube-system pods found
	I0802 19:12:50.128551   73373 system_pods.go:89] "coredns-7db6d8ff4d-n59rs" [450a63b5-55c6-40b7-985b-f444ed2d9fba] Running
	I0802 19:12:50.128560   73373 system_pods.go:89] "etcd-flannel-800809" [929ed6c8-4ba2-4614-8bd6-a1f5bb842702] Running
	I0802 19:12:50.128566   73373 system_pods.go:89] "kube-apiserver-flannel-800809" [1424e1f7-4929-40dd-ac34-73e3b1ea59c2] Running
	I0802 19:12:50.128572   73373 system_pods.go:89] "kube-controller-manager-flannel-800809" [02d0cf02-543d-4def-89d2-8fca3a15a0cc] Running
	I0802 19:12:50.128578   73373 system_pods.go:89] "kube-proxy-tnw7q" [a653fa05-c53f-459f-b85e-93f3a80e3be5] Running
	I0802 19:12:50.128583   73373 system_pods.go:89] "kube-scheduler-flannel-800809" [94d72541-3fef-4cf6-b7ef-6e69eb8d763f] Running
	I0802 19:12:50.128593   73373 system_pods.go:89] "storage-provisioner" [9078e148-1a3b-4849-877f-d4f664235a43] Running
	I0802 19:12:50.128603   73373 system_pods.go:126] duration metric: took 203.799009ms to wait for k8s-apps to be running ...
	I0802 19:12:50.128616   73373 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:12:50.128668   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:12:50.146666   73373 system_svc.go:56] duration metric: took 18.039008ms WaitForService to wait for kubelet
	I0802 19:12:50.146702   73373 kubeadm.go:582] duration metric: took 27.487050042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:12:50.146728   73373 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:12:50.325380   73373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:12:50.325404   73373 node_conditions.go:123] node cpu capacity is 2
	I0802 19:12:50.325415   73373 node_conditions.go:105] duration metric: took 178.680912ms to run NodePressure ...
	I0802 19:12:50.325426   73373 start.go:241] waiting for startup goroutines ...
	I0802 19:12:50.325432   73373 start.go:246] waiting for cluster config update ...
	I0802 19:12:50.325443   73373 start.go:255] writing updated cluster config ...
	I0802 19:12:50.325714   73373 ssh_runner.go:195] Run: rm -f paused
	I0802 19:12:50.372070   73373 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:12:50.373849   73373 out.go:177] * Done! kubectl is now configured to use "flannel-800809" cluster and "default" namespace by default
	I0802 19:12:49.731163   75193 addons.go:510] duration metric: took 1.341512478s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0802 19:12:51.257274   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:53.257732   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:55.257866   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:57.758927   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:00.258370   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:02.756877   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:04.757945   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:07.258436   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:09.757346   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:11.757579   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:13.758039   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:16.257374   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:18.257937   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:20.258084   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:22.258126   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:24.757470   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:27.257825   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:29.257281   75193 pod_ready.go:92] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.257324   75193 pod_ready.go:81] duration metric: took 40.005751555s for pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.257337   75193 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.259258   75193 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zrfqs" not found
	I0802 19:13:29.259286   75193 pod_ready.go:81] duration metric: took 1.937219ms for pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace to be "Ready" ...
	E0802 19:13:29.259297   75193 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zrfqs" not found
	I0802 19:13:29.259303   75193 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.263672   75193 pod_ready.go:92] pod "etcd-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.263693   75193 pod_ready.go:81] duration metric: took 4.38474ms for pod "etcd-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.263703   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.268684   75193 pod_ready.go:92] pod "kube-apiserver-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.268711   75193 pod_ready.go:81] duration metric: took 4.999699ms for pod "kube-apiserver-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.268725   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.273639   75193 pod_ready.go:92] pod "kube-controller-manager-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.273657   75193 pod_ready.go:81] duration metric: took 4.925321ms for pod "kube-controller-manager-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.273666   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-sg47p" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.456055   75193 pod_ready.go:92] pod "kube-proxy-sg47p" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.456079   75193 pod_ready.go:81] duration metric: took 182.40732ms for pod "kube-proxy-sg47p" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.456088   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.856525   75193 pod_ready.go:92] pod "kube-scheduler-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.856549   75193 pod_ready.go:81] duration metric: took 400.453989ms for pod "kube-scheduler-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.856559   75193 pod_ready.go:38] duration metric: took 40.622262236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
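
Most of the 40s of extra waiting above was spent on the remaining CoreDNS pod (coredns-7db6d8ff4d-7v5ln) becoming Ready. Outside the test harness the same condition can be checked in one shot, for example:

  # One-shot equivalent of the CoreDNS readiness poll above
  kubectl --context bridge-800809 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=15m
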
	I0802 19:13:29.856575   75193 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:13:29.856622   75193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:13:29.871247   75193 api_server.go:72] duration metric: took 41.481624333s to wait for apiserver process to appear ...
	I0802 19:13:29.871280   75193 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:13:29.871303   75193 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0802 19:13:29.876697   75193 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0802 19:13:29.877685   75193 api_server.go:141] control plane version: v1.30.3
	I0802 19:13:29.877709   75193 api_server.go:131] duration metric: took 6.422138ms to wait for apiserver health ...
	I0802 19:13:29.877718   75193 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:13:30.059207   75193 system_pods.go:59] 7 kube-system pods found
	I0802 19:13:30.059251   75193 system_pods.go:61] "coredns-7db6d8ff4d-7v5ln" [f2d90271-99be-4660-90b0-e0d49cb8164e] Running
	I0802 19:13:30.059259   75193 system_pods.go:61] "etcd-bridge-800809" [5f7b9afb-9447-4e87-b927-ad75682d760a] Running
	I0802 19:13:30.059263   75193 system_pods.go:61] "kube-apiserver-bridge-800809" [6875e96d-bd0f-4435-b4eb-8f84f1c886df] Running
	I0802 19:13:30.059266   75193 system_pods.go:61] "kube-controller-manager-bridge-800809" [e19b2da5-715c-4306-9318-7c06ffe02503] Running
	I0802 19:13:30.059270   75193 system_pods.go:61] "kube-proxy-sg47p" [3b228ae6-c57f-46a8-837e-ebbc3249048a] Running
	I0802 19:13:30.059273   75193 system_pods.go:61] "kube-scheduler-bridge-800809" [ce6e55e7-132f-4ccf-a483-f363617e6964] Running
	I0802 19:13:30.059276   75193 system_pods.go:61] "storage-provisioner" [e235ec51-a2c4-4df0-8714-0b0268979e99] Running
	I0802 19:13:30.059282   75193 system_pods.go:74] duration metric: took 181.557889ms to wait for pod list to return data ...
	I0802 19:13:30.059289   75193 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:13:30.255438   75193 default_sa.go:45] found service account: "default"
	I0802 19:13:30.255462   75193 default_sa.go:55] duration metric: took 196.167232ms for default service account to be created ...
	I0802 19:13:30.255469   75193 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:13:30.458752   75193 system_pods.go:86] 7 kube-system pods found
	I0802 19:13:30.458779   75193 system_pods.go:89] "coredns-7db6d8ff4d-7v5ln" [f2d90271-99be-4660-90b0-e0d49cb8164e] Running
	I0802 19:13:30.458786   75193 system_pods.go:89] "etcd-bridge-800809" [5f7b9afb-9447-4e87-b927-ad75682d760a] Running
	I0802 19:13:30.458791   75193 system_pods.go:89] "kube-apiserver-bridge-800809" [6875e96d-bd0f-4435-b4eb-8f84f1c886df] Running
	I0802 19:13:30.458795   75193 system_pods.go:89] "kube-controller-manager-bridge-800809" [e19b2da5-715c-4306-9318-7c06ffe02503] Running
	I0802 19:13:30.458799   75193 system_pods.go:89] "kube-proxy-sg47p" [3b228ae6-c57f-46a8-837e-ebbc3249048a] Running
	I0802 19:13:30.458803   75193 system_pods.go:89] "kube-scheduler-bridge-800809" [ce6e55e7-132f-4ccf-a483-f363617e6964] Running
	I0802 19:13:30.458806   75193 system_pods.go:89] "storage-provisioner" [e235ec51-a2c4-4df0-8714-0b0268979e99] Running
	I0802 19:13:30.458815   75193 system_pods.go:126] duration metric: took 203.338605ms to wait for k8s-apps to be running ...
	I0802 19:13:30.458824   75193 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:13:30.458883   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:13:30.474400   75193 system_svc.go:56] duration metric: took 15.566458ms WaitForService to wait for kubelet
	I0802 19:13:30.474437   75193 kubeadm.go:582] duration metric: took 42.084819474s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:13:30.474461   75193 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:13:30.656106   75193 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:13:30.656134   75193 node_conditions.go:123] node cpu capacity is 2
	I0802 19:13:30.656145   75193 node_conditions.go:105] duration metric: took 181.678537ms to run NodePressure ...
	I0802 19:13:30.656156   75193 start.go:241] waiting for startup goroutines ...
	I0802 19:13:30.656162   75193 start.go:246] waiting for cluster config update ...
	I0802 19:13:30.656171   75193 start.go:255] writing updated cluster config ...
	I0802 19:13:30.656438   75193 ssh_runner.go:195] Run: rm -f paused
	I0802 19:13:30.702455   75193 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:13:30.704336   75193 out.go:177] * Done! kubectl is now configured to use "bridge-800809" cluster and "default" namespace by default
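The tail of this start log (pgrep for the apiserver process, then the healthz probe against https://192.168.39.217:8443/healthz returning "200: ok") can be reproduced outside the test harness. Below is a minimal Go sketch of that healthz check, using the address from the log above; the InsecureSkipVerify setting is an assumption made here for brevity, whereas the real check validates against the cluster CA.

// healthzcheck.go - minimal sketch of the apiserver healthz probe seen in the
// log above. Endpoint URL taken from the log; skipping TLS verification is an
// illustrative shortcut, not what minikube itself does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.217:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expect status 200 and a body of "ok", matching the log output above.
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}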
	
	
	==> CRI-O <==
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.395488423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626073395465108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4678869-860a-4e54-bb5c-59eb07d17d8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.395921828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=416ef70b-f162-4169-8044-129f679d553b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.395982183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=416ef70b-f162-4169-8044-129f679d553b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.396236637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=416ef70b-f162-4169-8044-129f679d553b name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.439486814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70475fa5-ef68-4657-b9a3-7f7bf5c76d27 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.439584045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70475fa5-ef68-4657-b9a3-7f7bf5c76d27 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.446045520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b6ada16-79f5-4b79-9080-1e4e97bf48ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.446554423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626073446530093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b6ada16-79f5-4b79-9080-1e4e97bf48ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.447510021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68b7a82b-781e-482e-851c-c58eb71e7adc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.447585073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68b7a82b-781e-482e-851c-c58eb71e7adc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.447817345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68b7a82b-781e-482e-851c-c58eb71e7adc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.486726535Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2699378-4284-4be4-ac7d-a31df838ff12 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.486805743Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2699378-4284-4be4-ac7d-a31df838ff12 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.488412318Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da0c2395-1945-4887-898d-9218227dffce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.488896175Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626073488806885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da0c2395-1945-4887-898d-9218227dffce name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.489448165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a1af541-494e-4859-89ca-3184ff14935c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.489502103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a1af541-494e-4859-89ca-3184ff14935c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.489710037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a1af541-494e-4859-89ca-3184ff14935c name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.521004387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=81ce11cf-4d8a-4d06-93b1-b98e751d8983 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.521211012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81ce11cf-4d8a-4d06-93b1-b98e751d8983 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.522311263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c05d04e-241b-47fb-ba55-f09e183c4e60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.523460456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626073523433449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c05d04e-241b-47fb-ba55-f09e183c4e60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.525289556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d0f151e-78bc-444b-89f0-5606fd268264 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.525363188Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d0f151e-78bc-444b-89f0-5606fd268264 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:14:33 embed-certs-757654 crio[723]: time="2024-08-02 19:14:33.525660535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d0f151e-78bc-444b-89f0-5606fd268264 name=/runtime.v1.RuntimeService/ListContainers
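The repeated Version/ImageFsInfo/ListContainers requests above are routine CRI polling traffic that CRI-O answers over its socket. A sketch of issuing the same ListContainers call by hand against that socket (the unix:///var/run/crio/crio.sock path shown in the node's cri-socket annotation further down) might look like the following; the gRPC wiring and error handling are illustrative, not the harness's own code.

// listcontainers.go - sketch of the ListContainers RPC that CRI-O is answering
// in the debug log above, issued directly over the CRI-O socket.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, which is why the log
	// prints "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}

On the node itself, sudo crictl ps -a returns the same data, rendered in roughly the form of the "container status" table that follows.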
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd26613a29e0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ae968924856f7       storage-provisioner
	e3a1d5601411c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c43cc07a8b6a5       coredns-7db6d8ff4d-bm67n
	99c2bf68ade76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   41e6c4c44f01a       coredns-7db6d8ff4d-rfg9v
	1c94d11725896       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   1f757d0c569a7       kube-proxy-8w67s
	8cfa071ae2581       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   7d52af71254c0       kube-apiserver-embed-certs-757654
	85f83ecf32e6a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   c3ca7a780f469       kube-scheduler-embed-certs-757654
	6f2cf313b754a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   f482ad4efa6dd       kube-controller-manager-embed-certs-757654
	215b59df5d654       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   d09c466332c2a       etcd-embed-certs-757654
	b7aa928d29603       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   14 minutes ago      Exited              kube-apiserver            1                   7eb934b7d29e5       kube-apiserver-embed-certs-757654
	
	
	==> coredns [99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-757654
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-757654
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=embed-certs-757654
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T19_05_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 19:05:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-757654
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 19:14:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 19:10:45 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 19:10:45 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 19:10:45 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 19:10:45 +0000   Fri, 02 Aug 2024 19:05:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.74
	  Hostname:    embed-certs-757654
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffd2022e12cc44c49e899fab8e76d6ac
	  System UUID:                ffd2022e-12cc-44c4-9e89-9fab8e76d6ac
	  Boot ID:                    537e9d85-e3aa-4e14-8a47-e5da258ba33d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bm67n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m3s
	  kube-system                 coredns-7db6d8ff4d-rfg9v                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m3s
	  kube-system                 etcd-embed-certs-757654                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-embed-certs-757654             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-embed-certs-757654    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-8w67s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-scheduler-embed-certs-757654             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-569cc877fc-d69sk               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m2s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m1s                   kube-proxy       
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-757654 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node embed-certs-757654 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m4s                   node-controller  Node embed-certs-757654 event: Registered Node embed-certs-757654 in Controller
	
	
	==> dmesg <==
	[  +0.052063] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037497] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.868325] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug 2 19:00] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.890168] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061366] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060401] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.193601] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.123048] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.013981] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +2.084419] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +0.071000] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.530882] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.495351] kauditd_printk_skb: 79 callbacks suppressed
	[Aug 2 19:05] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.070800] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.486307] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[  +0.079417] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.749009] systemd-fstab-generator[4118]: Ignoring "noauto" option for root device
	[  +0.083645] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 2 19:06] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc] <==
	{"level":"warn","ts":"2024-08-02T19:09:51.871625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.749932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:09:51.871655Z","caller":"traceutil/trace.go:171","msg":"trace[157763035] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:702; }","duration":"102.816446ms","start":"2024-08-02T19:09:51.768833Z","end":"2024-08-02T19:09:51.87165Z","steps":["trace[157763035] 'agreement among raft nodes before linearized reading'  (duration: 102.775417ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:09:56.210298Z","caller":"traceutil/trace.go:171","msg":"trace[1834743971] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"108.460106ms","start":"2024-08-02T19:09:56.101822Z","end":"2024-08-02T19:09:56.210282Z","steps":["trace[1834743971] 'process raft request'  (duration: 61.898964ms)","trace[1834743971] 'compare'  (duration: 46.440933ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:10:29.436648Z","caller":"traceutil/trace.go:171","msg":"trace[636469253] transaction","detail":"{read_only:false; response_revision:733; number_of_response:1; }","duration":"244.562828ms","start":"2024-08-02T19:10:29.192003Z","end":"2024-08-02T19:10:29.436566Z","steps":["trace[636469253] 'process raft request'  (duration: 244.364689ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:10:36.941634Z","caller":"traceutil/trace.go:171","msg":"trace[1566925280] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"142.990427ms","start":"2024-08-02T19:10:36.798617Z","end":"2024-08-02T19:10:36.941608Z","steps":["trace[1566925280] 'process raft request'  (duration: 129.413616ms)","trace[1566925280] 'compare'  (duration: 13.372676ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:10:37.661101Z","caller":"traceutil/trace.go:171","msg":"trace[1297133510] linearizableReadLoop","detail":"{readStateIndex:815; appliedIndex:814; }","duration":"162.348521ms","start":"2024-08-02T19:10:37.498708Z","end":"2024-08-02T19:10:37.661057Z","steps":["trace[1297133510] 'read index received'  (duration: 162.209922ms)","trace[1297133510] 'applied index is now lower than readState.Index'  (duration: 137.993µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:10:37.661284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.539001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:10:37.661338Z","caller":"traceutil/trace.go:171","msg":"trace[308625752] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:741; }","duration":"162.6743ms","start":"2024-08-02T19:10:37.498648Z","end":"2024-08-02T19:10:37.661322Z","steps":["trace[308625752] 'agreement among raft nodes before linearized reading'  (duration: 162.512891ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:10:37.661586Z","caller":"traceutil/trace.go:171","msg":"trace[118216297] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"175.275652ms","start":"2024-08-02T19:10:37.486277Z","end":"2024-08-02T19:10:37.661553Z","steps":["trace[118216297] 'process raft request'  (duration: 174.659394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:10.324464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.131182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:11:10.324544Z","caller":"traceutil/trace.go:171","msg":"trace[1625272260] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:767; }","duration":"317.260794ms","start":"2024-08-02T19:11:10.00727Z","end":"2024-08-02T19:11:10.324531Z","steps":["trace[1625272260] 'count revisions from in-memory index tree'  (duration: 317.059543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:10.324592Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T19:11:10.007255Z","time spent":"317.321931ms","remote":"127.0.0.1:51926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "}
	{"level":"info","ts":"2024-08-02T19:11:11.604004Z","caller":"traceutil/trace.go:171","msg":"trace[818783036] linearizableReadLoop","detail":"{readStateIndex:848; appliedIndex:847; }","duration":"113.620833ms","start":"2024-08-02T19:11:11.490366Z","end":"2024-08-02T19:11:11.603987Z","steps":["trace[818783036] 'read index received'  (duration: 46.937532ms)","trace[818783036] 'applied index is now lower than readState.Index'  (duration: 66.682458ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:11:11.604226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.841863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T19:11:11.604405Z","caller":"traceutil/trace.go:171","msg":"trace[548918230] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:767; }","duration":"114.054495ms","start":"2024-08-02T19:11:11.490339Z","end":"2024-08-02T19:11:11.604394Z","steps":["trace[548918230] 'agreement among raft nodes before linearized reading'  (duration: 113.833282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:11.604361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.07225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:11:11.60486Z","caller":"traceutil/trace.go:171","msg":"trace[882510175] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:767; }","duration":"106.557551ms","start":"2024-08-02T19:11:11.498279Z","end":"2024-08-02T19:11:11.604837Z","steps":["trace[882510175] 'agreement among raft nodes before linearized reading'  (duration: 106.061843ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:11:26.278989Z","caller":"traceutil/trace.go:171","msg":"trace[1944138405] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"141.34292ms","start":"2024-08-02T19:11:26.137633Z","end":"2024-08-02T19:11:26.278976Z","steps":["trace[1944138405] 'process raft request'  (duration: 141.292814ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:11:26.279341Z","caller":"traceutil/trace.go:171","msg":"trace[683198444] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"145.023921ms","start":"2024-08-02T19:11:26.134301Z","end":"2024-08-02T19:11:26.279325Z","steps":["trace[683198444] 'process raft request'  (duration: 113.416914ms)","trace[683198444] 'compare'  (duration: 31.113487ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:12:24.837447Z","caller":"traceutil/trace.go:171","msg":"trace[852352281] transaction","detail":"{read_only:false; response_revision:827; number_of_response:1; }","duration":"196.383204ms","start":"2024-08-02T19:12:24.641024Z","end":"2024-08-02T19:12:24.837408Z","steps":["trace[852352281] 'process raft request'  (duration: 195.876576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:12:25.110286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.187239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T19:12:25.110357Z","caller":"traceutil/trace.go:171","msg":"trace[1516960778] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:827; }","duration":"121.340705ms","start":"2024-08-02T19:12:24.989001Z","end":"2024-08-02T19:12:25.110341Z","steps":["trace[1516960778] 'count revisions from in-memory index tree'  (duration: 121.096566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:12:26.290318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.453722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.72.74\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-02T19:12:26.290506Z","caller":"traceutil/trace.go:171","msg":"trace[1341978625] range","detail":"{range_begin:/registry/masterleases/192.168.72.74; range_end:; response_count:1; response_revision:828; }","duration":"240.672986ms","start":"2024-08-02T19:12:26.049813Z","end":"2024-08-02T19:12:26.290486Z","steps":["trace[1341978625] 'range keys from in-memory index tree'  (duration: 240.319777ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:12:27.143813Z","caller":"traceutil/trace.go:171","msg":"trace[1781765146] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"179.543675ms","start":"2024-08-02T19:12:26.964235Z","end":"2024-08-02T19:12:27.143779Z","steps":["trace[1781765146] 'process raft request'  (duration: 179.364885ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:14:33 up 14 min,  0 users,  load average: 0.15, 0.08, 0.04
	Linux embed-certs-757654 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3] <==
	Trace[1951510320]: [563.952242ms] [563.952242ms] END
	W0802 19:10:14.416624       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:10:14.416745       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0802 19:10:15.417775       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:10:15.417853       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:10:15.417867       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:10:15.417948       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:10:15.418033       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:10:15.419277       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:11:15.418762       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:11:15.418812       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:11:15.418821       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:11:15.420116       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:11:15.420178       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:11:15.420187       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:13:15.419320       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:13:15.419404       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:13:15.419415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:13:15.420591       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:13:15.420667       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:13:15.420675       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63] <==
	W0802 19:05:05.641487       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.641487       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.667854       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.748545       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.753340       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.758146       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.762807       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.947403       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.982762       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.993160       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.041404       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.140395       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.165763       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.224930       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.246713       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.334455       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.368580       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.408219       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.410751       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.526717       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.868034       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.871608       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.023805       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.255873       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.403161       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0] <==
	I0802 19:09:00.471249       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:09:30.029845       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:09:30.479591       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:10:00.036955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:10:00.488915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:10:30.042996       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:10:30.498530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:11:00.050670       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:11:00.507059       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:11:24.760755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="296.182µs"
	E0802 19:11:30.055773       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:11:30.517141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:11:37.759408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="249.417µs"
	E0802 19:12:00.063053       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:12:00.534354       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:12:30.075680       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:12:30.545314       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:13:00.080629       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:13:00.556330       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:13:30.085977       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:13:30.564498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:14:00.091463       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:14:00.574317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:14:30.097172       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:14:30.584221       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab] <==
	I0802 19:05:31.622489       1 server_linux.go:69] "Using iptables proxy"
	I0802 19:05:31.654969       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.74"]
	I0802 19:05:31.960766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 19:05:31.960928       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 19:05:31.960999       1 server_linux.go:165] "Using iptables Proxier"
	I0802 19:05:31.976944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 19:05:31.992934       1 server.go:872] "Version info" version="v1.30.3"
	I0802 19:05:31.992958       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 19:05:32.020197       1 config.go:192] "Starting service config controller"
	I0802 19:05:32.026938       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 19:05:32.027139       1 config.go:101] "Starting endpoint slice config controller"
	I0802 19:05:32.027170       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 19:05:32.068952       1 config.go:319] "Starting node config controller"
	I0802 19:05:32.069354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 19:05:32.169940       1 shared_informer.go:320] Caches are synced for node config
	I0802 19:05:32.228282       1 shared_informer.go:320] Caches are synced for service config
	I0802 19:05:32.228371       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7] <==
	W0802 19:05:14.423272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 19:05:14.423295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 19:05:15.286917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 19:05:15.287117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 19:05:15.344829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 19:05:15.344948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 19:05:15.352248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 19:05:15.352292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 19:05:15.476564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 19:05:15.476617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 19:05:15.507701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 19:05:15.507756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 19:05:15.639854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 19:05:15.639921       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 19:05:15.667392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 19:05:15.667437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 19:05:15.688782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 19:05:15.688843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 19:05:15.724601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 19:05:15.724650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 19:05:15.728026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 19:05:15.728100       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 19:05:15.878717       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 19:05:15.878760       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 19:05:19.015327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 19:12:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:12:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:12:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:12:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:12:18 embed-certs-757654 kubelet[3932]: E0802 19:12:18.740486    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:12:33 embed-certs-757654 kubelet[3932]: E0802 19:12:33.739876    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:12:46 embed-certs-757654 kubelet[3932]: E0802 19:12:46.740010    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:12:58 embed-certs-757654 kubelet[3932]: E0802 19:12:58.742153    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:13:10 embed-certs-757654 kubelet[3932]: E0802 19:13:10.739711    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:13:16 embed-certs-757654 kubelet[3932]: E0802 19:13:16.755332    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:13:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:13:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:13:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:13:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:13:24 embed-certs-757654 kubelet[3932]: E0802 19:13:24.739844    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:13:37 embed-certs-757654 kubelet[3932]: E0802 19:13:37.740645    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:13:51 embed-certs-757654 kubelet[3932]: E0802 19:13:51.739259    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:14:03 embed-certs-757654 kubelet[3932]: E0802 19:14:03.740276    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:14:15 embed-certs-757654 kubelet[3932]: E0802 19:14:15.740122    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:14:16 embed-certs-757654 kubelet[3932]: E0802 19:14:16.755704    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:14:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:14:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:14:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:14:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:14:28 embed-certs-757654 kubelet[3932]: E0802 19:14:28.740428    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	
	
	==> storage-provisioner [cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4] <==
	I0802 19:05:32.316581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 19:05:32.347384       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 19:05:32.347610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 19:05:32.363635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 19:05:32.363937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8!
	I0802 19:05:32.364891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c60b342-6881-433c-974d-f7f6e4dc832f", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8 became leader
	I0802 19:05:32.467438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757654 -n embed-certs-757654
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-757654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-d69sk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk: exit status 1 (59.48937ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-d69sk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
... [the identical WARNING above was logged 98 times in succession while the poll retried against the unreachable apiserver at 192.168.50.104:8443] ...
E0802 19:07:43.927613   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.104:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.104:8443: connect: connection refused
... [the identical WARNING above was logged 6 more times before the wait gave up] ...
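The run of "connection refused" warnings shows the apiserver endpoint itself was not accepting connections for the entire wait, rather than the dashboard pods merely being slow to schedule. As a manual follow-up (a sketch only; these commands are not part of the test output and assume access to the CI host), the endpoint reported in the warnings can be probed directly:

	# TCP-level reachability check of the apiserver address from the warnings
	nc -vz 192.168.50.104 8443
	# component status for the profile, via the same binary the test uses
	out/minikube-linux-amd64 status -p old-k8s-version-490984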
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (215.959599ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-490984" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-490984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-490984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.981µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-490984 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
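The describe step above fails only because the test's context deadline had already expired (it returns in roughly 2µs), so it never reaches the cluster. Once the apiserver for old-k8s-version-490984 is serving again, the checks the test performs can be reproduced by hand; the following sketch reuses the profile, namespace, and label selector that appear in the log above and is illustrative rather than part of the test output:

	# apiserver component status, mirroring the test's status call
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984
	# the dashboard pods the wait loop was polling for
	kubectl --context old-k8s-version-490984 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# the deployment the post-failure describe step targets
	kubectl --context old-k8s-version-490984 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper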
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (210.000849ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-490984 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-490984        | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:42 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407306                  | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 18:43 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 18:55:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 18:55:07.300822   63271 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:55:07.301073   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301083   63271 out.go:304] Setting ErrFile to fd 2...
	I0802 18:55:07.301087   63271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:55:07.301311   63271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:55:07.301870   63271 out.go:298] Setting JSON to false
	I0802 18:55:07.302787   63271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5851,"bootTime":1722619056,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:55:07.302842   63271 start.go:139] virtualization: kvm guest
	I0802 18:55:07.305206   63271 out.go:177] * [embed-certs-757654] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:55:07.306647   63271 notify.go:220] Checking for updates...
	I0802 18:55:07.306680   63271 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:55:07.308191   63271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:55:07.309618   63271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:55:07.310900   63271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:55:07.312292   63271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:55:07.313676   63271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:55:07.315371   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:55:07.315804   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.315868   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.330686   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I0802 18:55:07.331071   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.331554   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.331573   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.331865   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.332028   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.332279   63271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:55:07.332554   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.332586   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.348583   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0802 18:55:07.349036   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.349454   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.349479   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.349841   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.350094   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.386562   63271 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 18:55:07.387914   63271 start.go:297] selected driver: kvm2
	I0802 18:55:07.387927   63271 start.go:901] validating driver "kvm2" against &{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.388032   63271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:55:07.388727   63271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.388793   63271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 18:55:07.403061   63271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 18:55:07.403460   63271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 18:55:07.403517   63271 cni.go:84] Creating CNI manager for ""
	I0802 18:55:07.403530   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 18:55:07.403564   63271 start.go:340] cluster config:
	{Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 18:55:07.403666   63271 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 18:55:07.405667   63271 out.go:177] * Starting "embed-certs-757654" primary control-plane node in "embed-certs-757654" cluster
	I0802 18:55:07.406842   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 18:55:07.406881   63271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 18:55:07.406891   63271 cache.go:56] Caching tarball of preloaded images
	I0802 18:55:07.406977   63271 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 18:55:07.406989   63271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 18:55:07.407139   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 18:55:07.407354   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:55:07.407402   63271 start.go:364] duration metric: took 27.558µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:55:07.407419   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:55:07.407426   63271 fix.go:54] fixHost starting: 
	I0802 18:55:07.407713   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:55:07.407759   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:55:07.421857   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0802 18:55:07.422321   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:55:07.422811   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:55:07.422834   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:55:07.423160   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:55:07.423321   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.423495   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:55:07.424925   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Running err=<nil>
	W0802 18:55:07.424950   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:55:07.427128   63271 out.go:177] * Updating the running kvm2 "embed-certs-757654" VM ...
	I0802 18:55:07.428434   63271 machine.go:94] provisionDockerMachine start ...
	I0802 18:55:07.428462   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:55:07.428711   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:55:07.431558   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432004   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 19:51:03 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 18:55:07.432035   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:55:07.432207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 18:55:07.432412   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 18:55:07.432774   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 18:55:07.432921   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 18:55:07.433139   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 18:55:07.433153   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 18:55:10.331372   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:13.403378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:19.483421   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:22.555412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:28.635392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:31.711303   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:40.827373   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:43.899432   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:49.979406   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:53.051366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:55:59.131387   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:02.203356   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:08.283365   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:11.355399   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:17.435474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:20.507366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:26.587339   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:29.659353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:35.739335   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:38.811375   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:44.891395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:47.963426   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:56.424677   58571 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0802 18:56:56.424763   58571 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0802 18:56:56.426349   58571 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0802 18:56:56.426400   58571 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 18:56:56.426486   58571 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 18:56:56.426574   58571 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 18:56:56.426653   58571 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 18:56:56.426705   58571 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 18:56:56.428652   58571 out.go:204]   - Generating certificates and keys ...
	I0802 18:56:56.428741   58571 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 18:56:56.428809   58571 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 18:56:56.428898   58571 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 18:56:56.428972   58571 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 18:56:56.429041   58571 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 18:56:56.429089   58571 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 18:56:56.429161   58571 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 18:56:56.429218   58571 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 18:56:56.429298   58571 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 18:56:56.429380   58571 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 18:56:56.429416   58571 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 18:56:56.429492   58571 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 18:56:56.429535   58571 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 18:56:56.429590   58571 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 18:56:56.429676   58571 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 18:56:56.429736   58571 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 18:56:56.429821   58571 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 18:56:56.429890   58571 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 18:56:56.429950   58571 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 18:56:56.430038   58571 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 18:56:56.431432   58571 out.go:204]   - Booting up control plane ...
	I0802 18:56:56.431529   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 18:56:56.431650   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 18:56:56.431737   58571 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 18:56:56.431820   58571 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 18:56:56.432000   58571 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0802 18:56:56.432070   58571 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0802 18:56:56.432142   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432320   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432400   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432555   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432625   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.432805   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.432899   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433090   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433160   58571 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0802 18:56:56.433309   58571 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0802 18:56:56.433316   58571 kubeadm.go:310] 
	I0802 18:56:56.433357   58571 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0802 18:56:56.433389   58571 kubeadm.go:310] 		timed out waiting for the condition
	I0802 18:56:56.433395   58571 kubeadm.go:310] 
	I0802 18:56:56.433430   58571 kubeadm.go:310] 	This error is likely caused by:
	I0802 18:56:56.433471   58571 kubeadm.go:310] 		- The kubelet is not running
	I0802 18:56:56.433602   58571 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0802 18:56:56.433617   58571 kubeadm.go:310] 
	I0802 18:56:56.433748   58571 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0802 18:56:56.433805   58571 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0802 18:56:56.433854   58571 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0802 18:56:56.433863   58571 kubeadm.go:310] 
	I0802 18:56:56.433949   58571 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0802 18:56:56.434017   58571 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0802 18:56:56.434023   58571 kubeadm.go:310] 
	I0802 18:56:56.434150   58571 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0802 18:56:56.434225   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0802 18:56:56.434317   58571 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0802 18:56:56.434408   58571 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0802 18:56:56.434422   58571 kubeadm.go:310] 
	I0802 18:56:56.434487   58571 kubeadm.go:394] duration metric: took 8m0.865897602s to StartCluster
	I0802 18:56:56.434534   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0802 18:56:56.434606   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0802 18:56:56.480531   58571 cri.go:89] found id: ""
	I0802 18:56:56.480556   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.480564   58571 logs.go:278] No container was found matching "kube-apiserver"
	I0802 18:56:56.480570   58571 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0802 18:56:56.480622   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0802 18:56:56.524218   58571 cri.go:89] found id: ""
	I0802 18:56:56.524249   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.524258   58571 logs.go:278] No container was found matching "etcd"
	I0802 18:56:56.524264   58571 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0802 18:56:56.524318   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0802 18:56:56.563951   58571 cri.go:89] found id: ""
	I0802 18:56:56.563977   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.563984   58571 logs.go:278] No container was found matching "coredns"
	I0802 18:56:56.563990   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0802 18:56:56.564046   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0802 18:56:56.600511   58571 cri.go:89] found id: ""
	I0802 18:56:56.600533   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.600540   58571 logs.go:278] No container was found matching "kube-scheduler"
	I0802 18:56:56.600545   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0802 18:56:56.600607   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0802 18:56:56.634000   58571 cri.go:89] found id: ""
	I0802 18:56:56.634024   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.634032   58571 logs.go:278] No container was found matching "kube-proxy"
	I0802 18:56:56.634038   58571 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0802 18:56:56.634088   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0802 18:56:56.667317   58571 cri.go:89] found id: ""
	I0802 18:56:56.667345   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.667356   58571 logs.go:278] No container was found matching "kube-controller-manager"
	I0802 18:56:56.667364   58571 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0802 18:56:56.667429   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0802 18:56:56.698619   58571 cri.go:89] found id: ""
	I0802 18:56:56.698646   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.698656   58571 logs.go:278] No container was found matching "kindnet"
	I0802 18:56:56.698664   58571 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0802 18:56:56.698726   58571 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0802 18:56:56.730196   58571 cri.go:89] found id: ""
	I0802 18:56:56.730222   58571 logs.go:276] 0 containers: []
	W0802 18:56:56.730239   58571 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0802 18:56:56.730253   58571 logs.go:123] Gathering logs for CRI-O ...
	I0802 18:56:56.730267   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0802 18:56:56.837916   58571 logs.go:123] Gathering logs for container status ...
	I0802 18:56:56.837958   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0802 18:56:56.881210   58571 logs.go:123] Gathering logs for kubelet ...
	I0802 18:56:56.881242   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0802 18:56:56.930673   58571 logs.go:123] Gathering logs for dmesg ...
	I0802 18:56:56.930712   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0802 18:56:56.944039   58571 logs.go:123] Gathering logs for describe nodes ...
	I0802 18:56:56.944072   58571 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0802 18:56:57.026441   58571 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0802 18:56:57.026505   58571 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
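
The kubeadm output above closes with generic advice for locating the failing control-plane container via crictl. As an illustration only, here is a minimal Go sketch, not part of minikube or this test suite, that wraps the same two crictl invocations the advice names; the bash -c wrapper, the helper name, and the argument handling are assumptions made for the sketch.

// troubleshoot.go: follow the kubeadm advice above by listing the kube
// containers CRI-O knows about and, given a container ID, dumping its logs.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a shell pipeline and returns its combined output, printing
// any error to stderr instead of aborting, since this is diagnostic only.
func run(cmd string) string {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%q failed: %v\n", cmd, err)
	}
	return string(out)
}

func main() {
	const sock = "/var/run/crio/crio.sock" // socket path quoted in the advice above

	// Step 1 of the advice: list all kube containers (running or exited), skipping pause.
	fmt.Print(run("sudo crictl --runtime-endpoint " + sock + " ps -a | grep kube | grep -v pause"))

	// Step 2: once a failing container ID has been identified, inspect its logs.
	if len(os.Args) > 1 {
		fmt.Print(run("sudo crictl --runtime-endpoint " + sock + " logs " + os.Args[1]))
	}
}

In this run the listing would come back empty, which matches the "0 containers" results gathered below.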
	W0802 18:56:57.026546   58571 out.go:239] * 
	W0802 18:56:57.026632   58571 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.026667   58571 out.go:239] * 
	W0802 18:56:57.027538   58571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 18:56:57.031093   58571 out.go:177] 
	W0802 18:56:57.032235   58571 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0802 18:56:57.032305   58571 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0802 18:56:57.032328   58571 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0802 18:56:57.033757   58571 out.go:177] 
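
The suggestion above points at the usual kubelet cgroup-driver workaround for this failure mode. As a rough sketch only, and not the integration suite's actual retry logic, a start could be retried with that flag pinned; the profile name below is a placeholder, and the other flags simply mirror the driver, runtime, and Kubernetes version seen in this log.

// retry_start.go: re-run `minikube start` for an existing profile with the
// kubelet cgroup driver forced to systemd, as the suggestion recommends.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-example" // hypothetical profile name for illustration
	cmd := exec.Command("minikube", "start",
		"-p", profile,
		"--driver=kvm2",
		"--container-runtime=crio",
		"--kubernetes-version=v1.20.0",
		"--extra-config=kubelet.cgroup-driver=systemd", // the flag the suggestion names
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "minikube start failed:", err)
		os.Exit(1)
	}
}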
	I0802 18:56:54.043379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:56:57.115474   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:03.195366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:06.267441   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:12.347367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:15.419454   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:21.499312   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:24.571479   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:30.651392   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:33.723367   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:39.803308   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:42.875410   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:48.959363   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:52.027390   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:57:58.107322   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:01.179384   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:07.259377   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:10.331445   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:16.411350   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:19.483337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:25.563336   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:28.635436   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:34.715391   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:37.787412   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:43.867364   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:46.939415   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:53.019307   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:58:56.091325   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:02.171408   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:05.247378   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:11.323383   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:14.395379   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:20.475380   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:23.547337   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:29.627318   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:32.699366   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:38.779353   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:41.851395   63271 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.74:22: connect: no route to host
	I0802 18:59:44.853138   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 18:59:44.853196   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853510   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 18:59:44.853536   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 18:59:44.853769   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 18:59:44.855229   63271 machine.go:97] duration metric: took 4m37.426779586s to provisionDockerMachine
	I0802 18:59:44.855272   63271 fix.go:56] duration metric: took 4m37.44784655s for fixHost
	I0802 18:59:44.855280   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 4m37.44786842s
	W0802 18:59:44.855294   63271 start.go:714] error starting host: provision: host is not running
	W0802 18:59:44.855364   63271 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0802 18:59:44.855373   63271 start.go:729] Will try again in 5 seconds ...
	I0802 18:59:49.856328   63271 start.go:360] acquireMachinesLock for embed-certs-757654: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 18:59:49.856452   63271 start.go:364] duration metric: took 63.536µs to acquireMachinesLock for "embed-certs-757654"
	I0802 18:59:49.856478   63271 start.go:96] Skipping create...Using existing machine configuration
	I0802 18:59:49.856486   63271 fix.go:54] fixHost starting: 
	I0802 18:59:49.856795   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:59:49.856820   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:59:49.872503   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0802 18:59:49.872935   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:59:49.873429   63271 main.go:141] libmachine: Using API Version  1
	I0802 18:59:49.873455   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:59:49.873775   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:59:49.874015   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 18:59:49.874138   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 18:59:49.875790   63271 fix.go:112] recreateIfNeeded on embed-certs-757654: state=Stopped err=<nil>
	I0802 18:59:49.875812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	W0802 18:59:49.875968   63271 fix.go:138] unexpected machine state, will restart: <nil>
	I0802 18:59:49.877961   63271 out.go:177] * Restarting existing kvm2 VM for "embed-certs-757654" ...
	I0802 18:59:49.879469   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Start
	I0802 18:59:49.879683   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring networks are active...
	I0802 18:59:49.880355   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network default is active
	I0802 18:59:49.880655   63271 main.go:141] libmachine: (embed-certs-757654) Ensuring network mk-embed-certs-757654 is active
	I0802 18:59:49.881013   63271 main.go:141] libmachine: (embed-certs-757654) Getting domain xml...
	I0802 18:59:49.881644   63271 main.go:141] libmachine: (embed-certs-757654) Creating domain...
	I0802 18:59:51.107468   63271 main.go:141] libmachine: (embed-certs-757654) Waiting to get IP...
	I0802 18:59:51.108364   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.108809   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.108870   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.108788   64474 retry.go:31] will retry after 219.792683ms: waiting for machine to come up
	I0802 18:59:51.330264   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.330775   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.330798   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.330741   64474 retry.go:31] will retry after 346.067172ms: waiting for machine to come up
	I0802 18:59:51.677951   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.678462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.678504   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.678436   64474 retry.go:31] will retry after 313.108863ms: waiting for machine to come up
	I0802 18:59:51.992934   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:51.993410   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:51.993439   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:51.993354   64474 retry.go:31] will retry after 427.090188ms: waiting for machine to come up
	I0802 18:59:52.421609   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:52.422050   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:52.422080   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:52.422014   64474 retry.go:31] will retry after 577.531979ms: waiting for machine to come up
	I0802 18:59:53.000756   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.001336   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.001366   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.001280   64474 retry.go:31] will retry after 808.196796ms: waiting for machine to come up
	I0802 18:59:53.811289   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:53.811650   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:53.811674   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:53.811600   64474 retry.go:31] will retry after 906.307667ms: waiting for machine to come up
	I0802 18:59:54.720008   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:54.720637   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:54.720667   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:54.720586   64474 retry.go:31] will retry after 951.768859ms: waiting for machine to come up
	I0802 18:59:55.674137   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:55.674555   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:55.674599   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:55.674505   64474 retry.go:31] will retry after 1.653444272s: waiting for machine to come up
	I0802 18:59:57.329527   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:57.329936   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:57.329962   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:57.329899   64474 retry.go:31] will retry after 1.517025614s: waiting for machine to come up
	I0802 18:59:58.848461   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 18:59:58.848947   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 18:59:58.848991   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 18:59:58.848907   64474 retry.go:31] will retry after 1.930384725s: waiting for machine to come up
	I0802 19:00:00.781462   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:00.781935   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:00.781965   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:00.781892   64474 retry.go:31] will retry after 3.609517872s: waiting for machine to come up
	I0802 19:00:04.395801   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:04.396325   63271 main.go:141] libmachine: (embed-certs-757654) DBG | unable to find current IP address of domain embed-certs-757654 in network mk-embed-certs-757654
	I0802 19:00:04.396353   63271 main.go:141] libmachine: (embed-certs-757654) DBG | I0802 19:00:04.396283   64474 retry.go:31] will retry after 4.053197681s: waiting for machine to come up
	I0802 19:00:08.453545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454111   63271 main.go:141] libmachine: (embed-certs-757654) Found IP for machine: 192.168.72.74
	I0802 19:00:08.454144   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has current primary IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.454154   63271 main.go:141] libmachine: (embed-certs-757654) Reserving static IP address...
	I0802 19:00:08.454669   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.454695   63271 main.go:141] libmachine: (embed-certs-757654) DBG | skip adding static IP to network mk-embed-certs-757654 - found existing host DHCP lease matching {name: "embed-certs-757654", mac: "52:54:00:d5:0f:4c", ip: "192.168.72.74"}
	I0802 19:00:08.454709   63271 main.go:141] libmachine: (embed-certs-757654) Reserved static IP address: 192.168.72.74
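
The "will retry after ..." lines above come from a retry loop that polls for the guest's DHCP lease with a growing, jittered delay until the IP shows up. Below is a minimal Go sketch of that pattern only; it is not minikube's retry.go, and lookupIP is a hypothetical stand-in for the lease probe.

	// retrywait.go: a sketch of retry-with-backoff; illustrative, not minikube's retry.go.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("no IP yet")

	// lookupIP is a hypothetical stand-in for reading the hypervisor's DHCP leases.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 { // pretend the lease appears on the 6th probe
			return "", errNoIP
		}
		return "192.168.72.74", nil
	}

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 0; ; attempt++ {
			if ip, err := lookupIP(attempt); err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Grow the delay with some jitter, capping it so a slow boot
			// does not stretch the interval indefinitely.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
	}
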
	I0802 19:00:08.454723   63271 main.go:141] libmachine: (embed-certs-757654) Waiting for SSH to be available...
	I0802 19:00:08.454741   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Getting to WaitForSSH function...
	I0802 19:00:08.457106   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457426   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.457477   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.457594   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH client type: external
	I0802 19:00:08.457622   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa (-rw-------)
	I0802 19:00:08.457655   63271 main.go:141] libmachine: (embed-certs-757654) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.74 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:00:08.457671   63271 main.go:141] libmachine: (embed-certs-757654) DBG | About to run SSH command:
	I0802 19:00:08.457689   63271 main.go:141] libmachine: (embed-certs-757654) DBG | exit 0
	I0802 19:00:08.583153   63271 main.go:141] libmachine: (embed-certs-757654) DBG | SSH cmd err, output: <nil>: 
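
The WaitForSSH step above shells out to the system ssh binary with a long list of -o options and treats a successful "exit 0" as proof that sshd is reachable. Here is a rough Go sketch of that probe using a subset of the options from the log; it is illustrative, not libmachine's implementation, and it will simply fail on a machine that cannot reach 192.168.72.74.

	// extssh.go: probe a host over ssh by running "exit 0"; a sketch, not libmachine.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa",
			"-p", "22",
			"docker@192.168.72.74",
			"exit 0", // the probe command: success means sshd is up
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}
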
	I0802 19:00:08.583546   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetConfigRaw
	I0802 19:00:08.584156   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.586987   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587373   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.587403   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.587628   63271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/config.json ...
	I0802 19:00:08.587836   63271 machine.go:94] provisionDockerMachine start ...
	I0802 19:00:08.587858   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:08.588062   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.590424   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.590790   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.590889   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.591079   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591258   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.591427   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.591610   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.591800   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.591815   63271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0802 19:00:08.699598   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0802 19:00:08.699631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.699874   63271 buildroot.go:166] provisioning hostname "embed-certs-757654"
	I0802 19:00:08.699905   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.700064   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.702828   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703221   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.703250   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.703426   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.703600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703751   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.703891   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.704036   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.704249   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.704267   63271 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-757654 && echo "embed-certs-757654" | sudo tee /etc/hostname
	I0802 19:00:08.825824   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-757654
	
	I0802 19:00:08.825854   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.828688   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829029   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.829059   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.829236   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:08.829456   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829603   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:08.829752   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:08.829933   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:08.830107   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:08.830124   63271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-757654' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-757654/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-757654' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:00:08.949050   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
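
The hostname provisioning above sets /etc/hostname over SSH and then patches /etc/hosts so that 127.0.1.1 resolves to the new name. A small sketch of rendering that fix-up script for an arbitrary hostname follows; it is purely illustrative and not minikube's provisioner.

	// hostsfix.go: render the /etc/hosts fix-up script for a given hostname (a sketch).
	package main

	import "fmt"

	func hostsScript(name string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	}

	func main() {
		fmt.Println(hostsScript("embed-certs-757654"))
	}
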
	I0802 19:00:08.949088   63271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:00:08.949109   63271 buildroot.go:174] setting up certificates
	I0802 19:00:08.949117   63271 provision.go:84] configureAuth start
	I0802 19:00:08.949135   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetMachineName
	I0802 19:00:08.949433   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:08.952237   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952545   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.952573   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.952723   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:08.954970   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955440   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:08.955468   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:08.955644   63271 provision.go:143] copyHostCerts
	I0802 19:00:08.955696   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:00:08.955706   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:00:08.955801   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:00:08.955926   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:00:08.955939   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:00:08.955970   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:00:08.956043   63271 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:00:08.956051   63271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:00:08.956074   63271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:00:08.956136   63271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.embed-certs-757654 san=[127.0.0.1 192.168.72.74 embed-certs-757654 localhost minikube]
	I0802 19:00:09.274751   63271 provision.go:177] copyRemoteCerts
	I0802 19:00:09.274811   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:00:09.274833   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.277417   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277757   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.277782   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.277937   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.278139   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.278307   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.278429   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.360988   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:00:09.383169   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0802 19:00:09.406422   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0802 19:00:09.430412   63271 provision.go:87] duration metric: took 481.276691ms to configureAuth
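
configureAuth copies the host CA material and issues a server certificate whose SANs are the ones listed in the "generating server cert" line above (127.0.0.1, the node IP, the hostname, localhost, minikube). The sketch below issues such a certificate with the Go standard library; it is a compact illustration under those assumptions, not minikube's cert helper, and error handling is elided for brevity.

	// servercert.go: issue a SAN-bearing server cert signed by a throwaway CA (a sketch).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// A throwaway CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(1, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		// Server certificate with the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-757654"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-757654", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.74")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
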
	I0802 19:00:09.430474   63271 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:00:09.430718   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:00:09.430812   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.433678   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434068   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.434097   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.434234   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.434458   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434631   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.434768   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.434952   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.435197   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.435220   63271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:00:09.694497   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:00:09.694540   63271 machine.go:97] duration metric: took 1.10669177s to provisionDockerMachine
	I0802 19:00:09.694555   63271 start.go:293] postStartSetup for "embed-certs-757654" (driver="kvm2")
	I0802 19:00:09.694566   63271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:00:09.694586   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.694913   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:00:09.694938   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.697387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697722   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.697765   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.697828   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.698011   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.698159   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.698280   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.781383   63271 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:00:09.785521   63271 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:00:09.785555   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:00:09.785639   63271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:00:09.785760   63271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:00:09.785891   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:00:09.796028   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:09.820115   63271 start.go:296] duration metric: took 125.544407ms for postStartSetup
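
postStartSetup scans the local .minikube/files tree and mirrors every asset onto the guest at the same path relative to /, which is how 125472.pem ends up in /etc/ssl/certs. A minimal sketch of that scan, with an illustrative directory layout:

	// filesync.go: walk a local assets tree and print the guest path each file maps to (a sketch).
	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		root := ".minikube/files" // e.g. contains etc/ssl/certs/125472.pem
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return nil // skip unreadable entries and directories
			}
			rel, _ := filepath.Rel(root, path)
			fmt.Printf("local asset: %s -> /%s\n", path, filepath.ToSlash(rel))
			return nil
		})
		fmt.Println("done scanning", root)
	}
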
	I0802 19:00:09.820156   63271 fix.go:56] duration metric: took 19.963670883s for fixHost
	I0802 19:00:09.820175   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.823086   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823387   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.823427   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.823600   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.823881   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824077   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.824217   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.824403   63271 main.go:141] libmachine: Using SSH client type: native
	I0802 19:00:09.824616   63271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.74 22 <nil> <nil>}
	I0802 19:00:09.824627   63271 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:00:09.931624   63271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625209.908806442
	
	I0802 19:00:09.931652   63271 fix.go:216] guest clock: 1722625209.908806442
	I0802 19:00:09.931660   63271 fix.go:229] Guest: 2024-08-02 19:00:09.908806442 +0000 UTC Remote: 2024-08-02 19:00:09.82015998 +0000 UTC m=+302.554066499 (delta=88.646462ms)
	I0802 19:00:09.931680   63271 fix.go:200] guest clock delta is within tolerance: 88.646462ms
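
The guest clock check reads "date +%s.%N" on the VM and compares it with the host clock, accepting small skew (88.6ms here). A small sketch of that comparison follows; the 2s tolerance is an illustrative value, not necessarily minikube's.

	// clockdelta.go: parse the guest's date +%s.%N output and check the skew (a sketch).
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guestOut := "1722625209.908806442" // what the guest printed for date +%s.%N
		parts := strings.SplitN(guestOut, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		if delta > 2*time.Second {
			fmt.Println("clock skew too large: would resync the guest clock here")
		} else {
			fmt.Println("guest clock delta is within tolerance")
		}
	}
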
	I0802 19:00:09.931686   63271 start.go:83] releasing machines lock for "embed-certs-757654", held for 20.075223098s
	I0802 19:00:09.931706   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.931993   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:09.934694   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935023   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.935067   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.935214   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935703   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935866   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:00:09.935961   63271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:00:09.936013   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.936079   63271 ssh_runner.go:195] Run: cat /version.json
	I0802 19:00:09.936100   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:00:09.938619   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.938973   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.938996   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939017   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939183   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939346   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939541   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.939546   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:09.939566   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:09.939733   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:00:09.939753   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:09.939839   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:00:09.939986   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:00:09.940143   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:00:10.060439   63271 ssh_runner.go:195] Run: systemctl --version
	I0802 19:00:10.066688   63271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:00:10.209783   63271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:00:10.215441   63271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:00:10.215530   63271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:00:10.230786   63271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:00:10.230808   63271 start.go:495] detecting cgroup driver to use...
	I0802 19:00:10.230894   63271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:00:10.246480   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:00:10.260637   63271 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:00:10.260694   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:00:10.273890   63271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:00:10.286949   63271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:00:10.396045   63271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:00:10.558766   63271 docker.go:233] disabling docker service ...
	I0802 19:00:10.558830   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:00:10.572592   63271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:00:10.585221   63271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:00:10.711072   63271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:00:10.831806   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:00:10.853846   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:00:10.871644   63271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:00:10.871703   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.881356   63271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:00:10.881415   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.891537   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.901976   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.911415   63271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:00:10.921604   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.931914   63271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.948828   63271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:00:10.958456   63271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:00:10.967234   63271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:00:10.967291   63271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:00:10.980348   63271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:00:10.989378   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:11.105254   63271 ssh_runner.go:195] Run: sudo systemctl restart crio
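
The block above pins pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits before restarting CRI-O. Below is a rough Go equivalent of that key rewrite, meant to be run against a scratch copy of the file rather than the real config.

	// criosed.go: replace or append "key = value" lines in a CRI-O conf snippet (a sketch).
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setKey replaces any existing "key = ..." line, or appends one if absent.
	func setKey(conf, key, value string) string {
		lines := strings.Split(conf, "\n")
		found := false
		for i, l := range lines {
			t := strings.TrimSpace(l)
			if strings.HasPrefix(t, key+" ") || strings.HasPrefix(t, key+"=") {
				lines[i] = fmt.Sprintf("%s = %q", key, value)
				found = true
			}
		}
		if !found {
			lines = append(lines, fmt.Sprintf("%s = %q", key, value))
		}
		return strings.Join(lines, "\n")
	}

	func main() {
		data, err := os.ReadFile("02-crio.conf") // scratch copy, not the real file
		if err != nil {
			data = []byte("[crio.image]\npause_image = \"old\"\n")
		}
		out := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.9")
		out = setKey(out, "cgroup_manager", "cgroupfs")
		fmt.Println(out)
	}
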
	I0802 19:00:11.241019   63271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:00:11.241094   63271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:00:11.245512   63271 start.go:563] Will wait 60s for crictl version
	I0802 19:00:11.245560   63271 ssh_runner.go:195] Run: which crictl
	I0802 19:00:11.249126   63271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:00:11.287138   63271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:00:11.287233   63271 ssh_runner.go:195] Run: crio --version
	I0802 19:00:11.316821   63271 ssh_runner.go:195] Run: crio --version
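
Before querying crictl and crio for their versions, the code waits up to 60s for /var/run/crio/crio.sock to appear. A minimal sketch of that deadline-bounded poll:

	// waitsock.go: poll for a socket path until a deadline (a sketch of the 60s wait above).
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("socket is ready; crictl version can be queried now")
	}
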
	I0802 19:00:11.344756   63271 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:00:11.346052   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetIP
	I0802 19:00:11.348613   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349012   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:00:11.349040   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:00:11.349288   63271 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0802 19:00:11.353165   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:11.364518   63271 kubeadm.go:883] updating cluster {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:00:11.364682   63271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:00:11.364743   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:11.399565   63271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:00:11.399667   63271 ssh_runner.go:195] Run: which lz4
	I0802 19:00:11.403250   63271 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:00:11.406951   63271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:00:11.406982   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:00:12.658177   63271 crio.go:462] duration metric: took 1.254950494s to copy over tarball
	I0802 19:00:12.658258   63271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:00:14.794602   63271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.136306374s)
	I0802 19:00:14.794636   63271 crio.go:469] duration metric: took 2.136431079s to extract the tarball
	I0802 19:00:14.794644   63271 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 19:00:14.831660   63271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:00:14.871909   63271 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:00:14.871931   63271 cache_images.go:84] Images are preloaded, skipping loading
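
Copying and extracting the preload tarball is wrapped in the same duration-metric bookkeeping as the other slow steps. A small sketch of timing such a command; the tar invocation is the one from the log, while the wrapper is illustrative and will simply report an error on a machine without the tarball.

	// timedrun.go: run a command and report how long it took (a sketch, not ssh_runner.go).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		err := cmd.Run() // fails harmlessly where the tarball does not exist
		fmt.Printf("Completed: %v: (%s), err=%v\n", cmd.Args, time.Since(start), err)
	}
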
	I0802 19:00:14.871939   63271 kubeadm.go:934] updating node { 192.168.72.74 8443 v1.30.3 crio true true} ...
	I0802 19:00:14.872057   63271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-757654 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
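
The kubelet systemd drop-in above is generated from a handful of node parameters (Kubernetes version, node name, node IP). A sketch of rendering it with text/template; the template body mirrors the log output, while the surrounding code is illustrative and not minikube's kubeadm helpers.

	// kubeletunit.go: render the kubelet drop-in from node parameters (a sketch).
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.30.3",
			"NodeName":          "embed-certs-757654",
			"NodeIP":            "192.168.72.74",
		})
	}
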
	I0802 19:00:14.872134   63271 ssh_runner.go:195] Run: crio config
	I0802 19:00:14.921874   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:14.921937   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:14.921952   63271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:00:14.921978   63271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.74 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-757654 NodeName:embed-certs-757654 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:00:14.922146   63271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-757654"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 19:00:14.922224   63271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:00:14.931751   63271 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:00:14.931818   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:00:14.942115   63271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0802 19:00:14.959155   63271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:00:14.977137   63271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0802 19:00:14.994426   63271 ssh_runner.go:195] Run: grep 192.168.72.74	control-plane.minikube.internal$ /etc/hosts
	I0802 19:00:14.997882   63271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:00:15.009925   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:00:15.117317   63271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:00:15.133773   63271 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654 for IP: 192.168.72.74
	I0802 19:00:15.133798   63271 certs.go:194] generating shared ca certs ...
	I0802 19:00:15.133815   63271 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:00:15.133986   63271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:00:15.134036   63271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:00:15.134044   63271 certs.go:256] generating profile certs ...
	I0802 19:00:15.134174   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/client.key
	I0802 19:00:15.134268   63271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key.edfbb872
	I0802 19:00:15.134321   63271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key
	I0802 19:00:15.134471   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:00:15.134513   63271 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:00:15.134523   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:00:15.134559   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:00:15.134592   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:00:15.134629   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:00:15.134680   63271 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:00:15.135580   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:00:15.166676   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:00:15.198512   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:00:15.222007   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:00:15.256467   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0802 19:00:15.282024   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:00:15.313750   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:00:15.336950   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/embed-certs-757654/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 19:00:15.361688   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:00:15.385790   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:00:15.407897   63271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:00:15.432712   63271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:00:15.450086   63271 ssh_runner.go:195] Run: openssl version
	I0802 19:00:15.455897   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:00:15.466553   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470703   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.470764   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:00:15.476433   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:00:15.486297   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:00:15.497188   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501643   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.501712   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:00:15.507198   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 19:00:15.517747   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:00:15.528337   63271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532658   63271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.532704   63271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:00:15.537982   63271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
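
Each CA file above is made resolvable by OpenSSL by linking it as /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash". A sketch of that dance follows; for safety it creates the symlink under a temporary directory instead of /etc/ssl/certs.

	// certlink.go: hash a cert with openssl and create the <hash>.0 symlink (a sketch).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		link := filepath.Join(os.TempDir(), hash+".0")
		_ = os.Remove(link) // mimic "ln -fs": replace any stale link
		if err := os.Symlink(certPath, link); err != nil {
			fmt.Println("symlink failed:", err)
			return
		}
		fmt.Println("linked", link, "->", certPath)
	}
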
	I0802 19:00:15.547569   63271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:00:15.551539   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0802 19:00:15.556863   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0802 19:00:15.562004   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0802 19:00:15.567611   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0802 19:00:15.572837   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0802 19:00:15.577902   63271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
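
The "-checkend 86400" calls above ask whether each certificate expires within the next 24 hours. The same check can be done with the Go standard library; the path below is one of the certs from the log and can be pointed at any readable PEM certificate.

	// checkend.go: report whether a PEM certificate expires within 86400s (a sketch).
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 86400s")
		} else {
			fmt.Println("certificate is valid for at least another day")
		}
	}
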
	I0802 19:00:15.583126   63271 kubeadm.go:392] StartCluster: {Name:embed-certs-757654 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-757654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:00:15.583255   63271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:00:15.583325   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.618245   63271 cri.go:89] found id: ""
	I0802 19:00:15.618324   63271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:00:15.627752   63271 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0802 19:00:15.627774   63271 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0802 19:00:15.627830   63271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0802 19:00:15.636794   63271 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0802 19:00:15.637893   63271 kubeconfig.go:125] found "embed-certs-757654" server: "https://192.168.72.74:8443"
	I0802 19:00:15.640011   63271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0802 19:00:15.649091   63271 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.74
	I0802 19:00:15.649122   63271 kubeadm.go:1160] stopping kube-system containers ...
	I0802 19:00:15.649135   63271 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0802 19:00:15.649199   63271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:00:15.688167   63271 cri.go:89] found id: ""
	I0802 19:00:15.688231   63271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0802 19:00:15.707188   63271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:00:15.717501   63271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:00:15.717523   63271 kubeadm.go:157] found existing configuration files:
	
	I0802 19:00:15.717564   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:00:15.726600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:00:15.726648   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:00:15.736483   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:00:15.745075   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:00:15.745137   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:00:15.754027   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.762600   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:00:15.762650   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:00:15.771220   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:00:15.779384   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:00:15.779450   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
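
The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. A rough Go equivalent of that loop, under those assumptions (a sketch, not the actual kubeadm.go code):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: remove so kubeadm rewrites it.
				os.Remove(f)
				fmt.Println("removed (stale or absent):", f)
				continue
			}
			fmt.Println("kept:", f)
		}
	}
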
	I0802 19:00:15.788081   63271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:00:15.796772   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:15.902347   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.011025   63271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.108635171s)
	I0802 19:00:17.011068   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.229454   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.302558   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:17.405239   63271 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:00:17.405325   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:17.905496   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.405716   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:18.906507   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.405762   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.905447   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:00:19.920906   63271 api_server.go:72] duration metric: took 2.515676455s to wait for apiserver process to appear ...
	I0802 19:00:19.920938   63271 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:00:19.920965   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.287856   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.287881   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.287893   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.328293   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0802 19:00:22.328340   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0802 19:00:22.421484   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.426448   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.426493   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:22.921227   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:22.925796   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:22.925830   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.421392   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.426450   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0802 19:00:23.426474   63271 api_server.go:103] status: https://192.168.72.74:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0802 19:00:23.921015   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:00:23.925369   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0802 19:00:23.931827   63271 api_server.go:141] control plane version: v1.30.3
	I0802 19:00:23.931850   63271 api_server.go:131] duration metric: took 4.010904656s to wait for apiserver health ...
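
The healthz loop above tolerates the early 403s (anonymous access to /healthz is rejected until the RBAC bootstrap roles exist) and the 500s (individual poststarthooks still failing), and only stops once /healthz returns 200 "ok". A compact Go sketch of such a poll, assuming certificate verification is skipped as a bootstrap health probe typically does (illustrative, not api_server.go itself):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's serving cert may not be trusted yet during bootstrap.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.74:8443/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for a healthy apiserver")
	}
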
	I0802 19:00:23.931860   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:00:23.931869   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:00:23.933936   63271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:00:23.935422   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:00:23.946751   63271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
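
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are the bridge CNI configuration that the "Configuring bridge CNI" step refers to. The exact contents are not reproduced in the log; the sketch below writes a representative bridge + portmap conflist of the kind such a step produces (the field values are illustrative assumptions, not the file minikube actually generated):

	package main

	import "os"

	// A representative bridge CNI conflist; subnet and plugin set are assumptions.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
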
	I0802 19:00:23.965059   63271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:00:23.976719   63271 system_pods.go:59] 8 kube-system pods found
	I0802 19:00:23.976770   63271 system_pods.go:61] "coredns-7db6d8ff4d-dldmc" [fd66a301-73a8-4c3a-9a3c-813d9940c233] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:00:23.976782   63271 system_pods.go:61] "etcd-embed-certs-757654" [5644c343-74c1-4b35-8700-0f75991c1227] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0802 19:00:23.976793   63271 system_pods.go:61] "kube-apiserver-embed-certs-757654" [726eda65-25be-4f4d-9322-e8c285df16b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0802 19:00:23.976801   63271 system_pods.go:61] "kube-controller-manager-embed-certs-757654" [aa23470d-fb61-4a05-ad70-afa56cb3439c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0802 19:00:23.976808   63271 system_pods.go:61] "kube-proxy-k8lnc" [8cedcb95-3796-4c88-9980-74f75e1240f6] Running
	I0802 19:00:23.976816   63271 system_pods.go:61] "kube-scheduler-embed-certs-757654" [1f3f3c29-c680-44d8-8d6f-76a6d5f99eca] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0802 19:00:23.976824   63271 system_pods.go:61] "metrics-server-569cc877fc-8nfts" [fed56acf-7b52-4414-a3cd-003d769368a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 19:00:23.976830   63271 system_pods.go:61] "storage-provisioner" [b9e24584-d431-431e-a0ce-4e10c8ed28e7] Running
	I0802 19:00:23.976842   63271 system_pods.go:74] duration metric: took 11.758424ms to wait for pod list to return data ...
	I0802 19:00:23.976851   63271 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:00:23.980046   63271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:00:23.980077   63271 node_conditions.go:123] node cpu capacity is 2
	I0802 19:00:23.980091   63271 node_conditions.go:105] duration metric: took 3.224494ms to run NodePressure ...
	I0802 19:00:23.980110   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0802 19:00:24.244478   63271 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248352   63271 kubeadm.go:739] kubelet initialised
	I0802 19:00:24.248371   63271 kubeadm.go:740] duration metric: took 3.863328ms waiting for restarted kubelet to initialise ...
	I0802 19:00:24.248380   63271 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:00:24.260573   63271 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:26.266305   63271 pod_ready.go:102] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:28.267770   63271 pod_ready.go:92] pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:28.267794   63271 pod_ready.go:81] duration metric: took 4.007193958s for pod "coredns-7db6d8ff4d-dldmc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:28.267804   63271 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.281164   63271 pod_ready.go:102] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:30.775554   63271 pod_ready.go:92] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:30.775577   63271 pod_ready.go:81] duration metric: took 2.507766234s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:30.775587   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280678   63271 pod_ready.go:92] pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:31.280706   63271 pod_ready.go:81] duration metric: took 505.111529ms for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:31.280718   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:33.285821   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:35.786849   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:37.787600   63271 pod_ready.go:102] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:38.286212   63271 pod_ready.go:92] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.286238   63271 pod_ready.go:81] duration metric: took 7.005511802s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.286251   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290785   63271 pod_ready.go:92] pod "kube-proxy-k8lnc" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.290808   63271 pod_ready.go:81] duration metric: took 4.549071ms for pod "kube-proxy-k8lnc" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.290819   63271 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294795   63271 pod_ready.go:92] pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:00:38.294818   63271 pod_ready.go:81] duration metric: took 3.989197ms for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:38.294827   63271 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" ...
	I0802 19:00:40.301046   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:42.800745   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:45.300974   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:47.301922   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:49.800527   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:51.801849   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:54.301458   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:56.801027   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:00:59.300566   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:01.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:03.801351   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:05.801445   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:08.300706   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:10.801090   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:13.302416   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:15.801900   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:18.301115   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:20.801699   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:23.301191   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:25.801392   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:28.300859   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:30.303055   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:32.801185   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:35.300663   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:37.800850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:39.801554   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:42.299824   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:44.300915   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:46.301116   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:48.801022   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:50.801265   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:53.301815   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:55.804154   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:01:58.306260   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:00.800350   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:02.801306   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:04.801767   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:06.801850   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:09.300911   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:11.801540   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:13.801899   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:16.301139   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:18.801264   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:20.801310   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:22.801602   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:25.300418   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:27.800576   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:29.801107   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:32.300367   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:34.301544   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:36.800348   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:38.800863   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:41.301210   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:43.800898   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:45.801495   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:47.802115   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:50.300758   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:52.800119   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:54.800742   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:57.300894   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:02:59.301967   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:01.801753   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:04.300020   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:06.301903   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:08.801102   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:10.801655   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:13.301099   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:15.307703   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:17.800572   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:19.800718   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:21.801336   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:23.806594   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:26.300529   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:28.301514   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:30.801418   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:33.300343   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:35.301005   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:37.302055   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:39.800705   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:41.801159   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:43.801333   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:45.801519   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:47.803743   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:50.301107   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:52.302310   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:54.801379   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:56.802698   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:03:59.300329   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:01.302266   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:03.801942   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:06.302523   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:08.800574   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:10.802039   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:12.802886   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:15.307009   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:17.803399   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:20.303980   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:22.801487   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:25.300731   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:27.301890   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:29.801312   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:32.299843   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:34.300651   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:36.301491   63271 pod_ready.go:102] pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace has status "Ready":"False"
	I0802 19:04:38.294999   63271 pod_ready.go:81] duration metric: took 4m0.000155688s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" ...
	E0802 19:04:38.295040   63271 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8nfts" in "kube-system" namespace to be "Ready" (will not retry!)
	I0802 19:04:38.295060   63271 pod_ready.go:38] duration metric: took 4m14.04667112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
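
The timeout above is the core of this failure: metrics-server-569cc877fc-8nfts never reported the Ready condition within the 4m0s extra wait, so the restart path gives up and falls back to a full reset below. A minimal client-go sketch of that kind of Ready-condition poll (an illustration of the pattern, not minikube's pod_ready.go; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-8nfts", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// The poll succeeds only when the PodReady condition is True.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
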
	I0802 19:04:38.295085   63271 kubeadm.go:597] duration metric: took 4m22.667305395s to restartPrimaryControlPlane
	W0802 19:04:38.295180   63271 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0802 19:04:38.295215   63271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0802 19:05:09.113784   63271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (30.818542247s)
	I0802 19:05:09.113872   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:05:09.132652   63271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:05:09.151560   63271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:05:09.161782   63271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:05:09.161805   63271 kubeadm.go:157] found existing configuration files:
	
	I0802 19:05:09.161852   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:05:09.170533   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:05:09.170597   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:05:09.179443   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:05:09.187823   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:05:09.187874   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:05:09.196537   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:05:09.204923   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:05:09.204971   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:05:09.213510   63271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:05:09.221920   63271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:05:09.221977   63271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:05:09.230545   63271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:05:09.279115   63271 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:05:09.279216   63271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:05:09.421011   63271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:05:09.421143   63271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:05:09.421309   63271 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 19:05:09.622157   63271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:05:09.624863   63271 out.go:204]   - Generating certificates and keys ...
	I0802 19:05:09.624938   63271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:05:09.625017   63271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:05:09.625115   63271 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0802 19:05:09.625212   63271 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0802 19:05:09.625309   63271 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0802 19:05:09.625401   63271 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0802 19:05:09.625486   63271 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0802 19:05:09.625571   63271 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0802 19:05:09.626114   63271 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0802 19:05:09.626203   63271 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0802 19:05:09.626241   63271 kubeadm.go:310] [certs] Using the existing "sa" key
	I0802 19:05:09.626289   63271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:05:09.822713   63271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:05:10.181638   63271 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:05:10.512424   63271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:05:10.714859   63271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:05:10.884498   63271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:05:10.885164   63271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:05:10.887815   63271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:05:10.889716   63271 out.go:204]   - Booting up control plane ...
	I0802 19:05:10.889837   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:05:10.889952   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:05:10.890264   63271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:05:10.909853   63271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:05:10.910852   63271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:05:10.910923   63271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:05:11.036494   63271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:05:11.036625   63271 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:05:11.538395   63271 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.975394ms
	I0802 19:05:11.538496   63271 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:05:16.040390   63271 kubeadm.go:310] [api-check] The API server is healthy after 4.501873699s
	I0802 19:05:16.052960   63271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:05:16.071975   63271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:05:16.097491   63271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:05:16.097745   63271 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-757654 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:05:16.114782   63271 kubeadm.go:310] [bootstrap-token] Using token: 16dj5v.yumf7pzn1z6g3iqs
	I0802 19:05:16.115985   63271 out.go:204]   - Configuring RBAC rules ...
	I0802 19:05:16.116118   63271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:05:16.120188   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:05:16.126277   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:05:16.128999   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:05:16.131913   63271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:05:16.137874   63271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:05:16.448583   63271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:05:16.887723   63271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:05:17.446999   63271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:05:17.448086   63271 kubeadm.go:310] 
	I0802 19:05:17.448166   63271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:05:17.448179   63271 kubeadm.go:310] 
	I0802 19:05:17.448264   63271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:05:17.448272   63271 kubeadm.go:310] 
	I0802 19:05:17.448308   63271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:05:17.448401   63271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:05:17.448471   63271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:05:17.448481   63271 kubeadm.go:310] 
	I0802 19:05:17.448574   63271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:05:17.448584   63271 kubeadm.go:310] 
	I0802 19:05:17.448647   63271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:05:17.448657   63271 kubeadm.go:310] 
	I0802 19:05:17.448723   63271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:05:17.448816   63271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:05:17.448921   63271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:05:17.448937   63271 kubeadm.go:310] 
	I0802 19:05:17.449030   63271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:05:17.449105   63271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:05:17.449111   63271 kubeadm.go:310] 
	I0802 19:05:17.449187   63271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 16dj5v.yumf7pzn1z6g3iqs \
	I0802 19:05:17.449311   63271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:05:17.449357   63271 kubeadm.go:310] 	--control-plane 
	I0802 19:05:17.449366   63271 kubeadm.go:310] 
	I0802 19:05:17.449480   63271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:05:17.449496   63271 kubeadm.go:310] 
	I0802 19:05:17.449581   63271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 16dj5v.yumf7pzn1z6g3iqs \
	I0802 19:05:17.449681   63271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:05:17.450848   63271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
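
The --discovery-token-ca-cert-hash printed in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA they discover via the bootstrap token. A small Go sketch of how such a hash can be computed from a CA certificate file (illustrative; the path assumes the certificateDir "/var/lib/minikube/certs" shown earlier in the log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in CA file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}
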
	I0802 19:05:17.450880   63271 cni.go:84] Creating CNI manager for ""
	I0802 19:05:17.450894   63271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:05:17.452619   63271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:05:17.453986   63271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:05:17.465774   63271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0802 19:05:17.490077   63271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:05:17.490204   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:17.490227   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-757654 minikube.k8s.io/updated_at=2024_08_02T19_05_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=embed-certs-757654 minikube.k8s.io/primary=true
	I0802 19:05:17.667909   63271 ops.go:34] apiserver oom_adj: -16
	I0802 19:05:17.668050   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:18.169144   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:18.668337   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:19.168306   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:19.669016   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:20.168693   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:20.668360   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:21.169136   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:21.668931   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:22.168445   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:22.668373   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:23.168654   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:23.668818   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:24.168975   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:24.668943   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:25.168934   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:25.669051   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:26.169075   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:26.668512   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:27.168715   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:27.669044   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:28.169018   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:28.668155   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:29.169111   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:29.669117   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:30.168732   63271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:05:30.251617   63271 kubeadm.go:1113] duration metric: took 12.761473169s to wait for elevateKubeSystemPrivileges
	I0802 19:05:30.251659   63271 kubeadm.go:394] duration metric: took 5m14.668560428s to StartCluster
	I0802 19:05:30.251683   63271 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:05:30.251781   63271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:05:30.253864   63271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:05:30.254120   63271 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.74 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:05:30.254228   63271 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:05:30.254286   63271 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-757654"
	I0802 19:05:30.254296   63271 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:05:30.254323   63271 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-757654"
	W0802 19:05:30.254333   63271 addons.go:243] addon storage-provisioner should already be in state true
	I0802 19:05:30.254351   63271 addons.go:69] Setting default-storageclass=true in profile "embed-certs-757654"
	I0802 19:05:30.254363   63271 addons.go:69] Setting metrics-server=true in profile "embed-certs-757654"
	I0802 19:05:30.254400   63271 addons.go:234] Setting addon metrics-server=true in "embed-certs-757654"
	I0802 19:05:30.254403   63271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-757654"
	W0802 19:05:30.254410   63271 addons.go:243] addon metrics-server should already be in state true
	I0802 19:05:30.254436   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.254366   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.254785   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254820   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.254855   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254884   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.254887   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.254928   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.256100   63271 out.go:177] * Verifying Kubernetes components...
	I0802 19:05:30.257487   63271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:05:30.270795   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46875
	I0802 19:05:30.271280   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37947
	I0802 19:05:30.271505   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.271784   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.272204   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.272229   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.272368   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.272401   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.272592   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.272737   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.273157   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I0802 19:05:30.273182   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.273226   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.273354   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.273386   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.273519   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.273996   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.274026   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.274365   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.274563   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.278582   63271 addons.go:234] Setting addon default-storageclass=true in "embed-certs-757654"
	W0802 19:05:30.278609   63271 addons.go:243] addon default-storageclass should already be in state true
	I0802 19:05:30.278640   63271 host.go:66] Checking if "embed-certs-757654" exists ...
	I0802 19:05:30.279018   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.279059   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.290269   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0802 19:05:30.291002   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.291611   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.291631   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.291674   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0802 19:05:30.292009   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.292112   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.292207   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.292748   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.292765   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.293075   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.293312   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.294125   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0802 19:05:30.294477   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.295166   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.295190   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.295632   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.296279   63271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:05:30.296442   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.296487   63271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:05:30.296864   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.298655   63271 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0802 19:05:30.298658   63271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:05:30.300094   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0802 19:05:30.300112   63271 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0802 19:05:30.300133   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.300247   63271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:05:30.300271   63271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:05:30.300294   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.304247   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.304746   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.304761   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.304783   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.305074   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.305142   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.305165   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.305413   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.305517   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.305629   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.305688   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.305850   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.305908   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.306275   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.317504   63271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I0802 19:05:30.317941   63271 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:05:30.318474   63271 main.go:141] libmachine: Using API Version  1
	I0802 19:05:30.318491   63271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:05:30.318858   63271 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:05:30.319055   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetState
	I0802 19:05:30.321556   63271 main.go:141] libmachine: (embed-certs-757654) Calling .DriverName
	I0802 19:05:30.321929   63271 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:05:30.321940   63271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:05:30.321955   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHHostname
	I0802 19:05:30.325005   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.325489   63271 main.go:141] libmachine: (embed-certs-757654) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:0f:4c", ip: ""} in network mk-embed-certs-757654: {Iface:virbr4 ExpiryTime:2024-08-02 20:00:00 +0000 UTC Type:0 Mac:52:54:00:d5:0f:4c Iaid: IPaddr:192.168.72.74 Prefix:24 Hostname:embed-certs-757654 Clientid:01:52:54:00:d5:0f:4c}
	I0802 19:05:30.325507   63271 main.go:141] libmachine: (embed-certs-757654) DBG | domain embed-certs-757654 has defined IP address 192.168.72.74 and MAC address 52:54:00:d5:0f:4c in network mk-embed-certs-757654
	I0802 19:05:30.325710   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHPort
	I0802 19:05:30.325887   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHKeyPath
	I0802 19:05:30.326077   63271 main.go:141] libmachine: (embed-certs-757654) Calling .GetSSHUsername
	I0802 19:05:30.326244   63271 sshutil.go:53] new ssh client: &{IP:192.168.72.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/embed-certs-757654/id_rsa Username:docker}
	I0802 19:05:30.427644   63271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:05:30.447261   63271 node_ready.go:35] waiting up to 6m0s for node "embed-certs-757654" to be "Ready" ...
	I0802 19:05:30.455056   63271 node_ready.go:49] node "embed-certs-757654" has status "Ready":"True"
	I0802 19:05:30.455077   63271 node_ready.go:38] duration metric: took 7.781034ms for node "embed-certs-757654" to be "Ready" ...
	I0802 19:05:30.455088   63271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:05:30.459517   63271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.464549   63271 pod_ready.go:92] pod "etcd-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.464574   63271 pod_ready.go:81] duration metric: took 5.029953ms for pod "etcd-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.464583   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.469443   63271 pod_ready.go:92] pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.469477   63271 pod_ready.go:81] duration metric: took 4.883324ms for pod "kube-apiserver-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.469492   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.474900   63271 pod_ready.go:92] pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.474924   63271 pod_ready.go:81] duration metric: took 5.424192ms for pod "kube-controller-manager-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.474933   63271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.481860   63271 pod_ready.go:92] pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace has status "Ready":"True"
	I0802 19:05:30.481880   63271 pod_ready.go:81] duration metric: took 6.940862ms for pod "kube-scheduler-embed-certs-757654" in "kube-system" namespace to be "Ready" ...
	I0802 19:05:30.481890   63271 pod_ready.go:38] duration metric: took 26.786983ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:05:30.481904   63271 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:05:30.481954   63271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:05:30.501252   63271 api_server.go:72] duration metric: took 247.089995ms to wait for apiserver process to appear ...
	I0802 19:05:30.501297   63271 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:05:30.501319   63271 api_server.go:253] Checking apiserver healthz at https://192.168.72.74:8443/healthz ...
	I0802 19:05:30.506521   63271 api_server.go:279] https://192.168.72.74:8443/healthz returned 200:
	ok
	I0802 19:05:30.507590   63271 api_server.go:141] control plane version: v1.30.3
	I0802 19:05:30.507613   63271 api_server.go:131] duration metric: took 6.307506ms to wait for apiserver health ...
	I0802 19:05:30.507622   63271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:05:30.559683   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0802 19:05:30.559711   63271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0802 19:05:30.564451   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:05:30.617129   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:05:30.639218   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0802 19:05:30.639250   63271 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0802 19:05:30.666665   63271 system_pods.go:59] 5 kube-system pods found
	I0802 19:05:30.666692   63271 system_pods.go:61] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:30.666697   63271 system_pods.go:61] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:30.666700   63271 system_pods.go:61] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:30.666705   63271 system_pods.go:61] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:30.666709   63271 system_pods.go:61] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:30.666717   63271 system_pods.go:74] duration metric: took 159.089874ms to wait for pod list to return data ...
	I0802 19:05:30.666724   63271 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:05:30.702756   63271 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 19:05:30.702788   63271 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0802 19:05:30.751529   63271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0802 19:05:30.878159   63271 default_sa.go:45] found service account: "default"
	I0802 19:05:30.878187   63271 default_sa.go:55] duration metric: took 211.457433ms for default service account to be created ...
	I0802 19:05:30.878198   63271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:05:31.060423   63271 system_pods.go:86] 7 kube-system pods found
	I0802 19:05:31.060453   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.060461   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.060466   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.060472   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.060476   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.060481   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.060485   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.060510   63271 retry.go:31] will retry after 244.863307ms: missing components: kube-dns, kube-proxy
	I0802 19:05:31.313026   63271 system_pods.go:86] 7 kube-system pods found
	I0802 19:05:31.313075   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.313094   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.313103   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.313111   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.313119   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.313130   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.313141   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.313162   63271 retry.go:31] will retry after 359.054186ms: missing components: kube-dns, kube-proxy
	I0802 19:05:31.476794   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.476831   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.476844   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.476881   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477155   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477211   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477227   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.477235   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477385   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.477404   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477425   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477437   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.477446   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.477466   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477477   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.477651   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.477705   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.477718   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.503788   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:31.503817   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:31.504094   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:31.504110   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:31.504149   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:31.682829   63271 system_pods.go:86] 8 kube-system pods found
	I0802 19:05:31.682863   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.682874   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:31.682881   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:31.682888   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:31.682896   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:31.682904   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0802 19:05:31.682911   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:31.682920   63271 system_pods.go:89] "storage-provisioner" [d3300a13-9ee5-4eeb-9e21-9ef40aad1379] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0802 19:05:31.682958   63271 retry.go:31] will retry after 403.454792ms: missing components: kube-dns, kube-proxy
	I0802 19:05:32.029198   63271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.277620332s)
	I0802 19:05:32.029247   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:32.029262   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:32.029731   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:32.029756   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:32.029768   63271 main.go:141] libmachine: Making call to close driver server
	I0802 19:05:32.029778   63271 main.go:141] libmachine: (embed-certs-757654) Calling .Close
	I0802 19:05:32.029783   63271 main.go:141] libmachine: (embed-certs-757654) DBG | Closing plugin on server side
	I0802 19:05:32.030050   63271 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:05:32.030087   63271 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:05:32.030101   63271 addons.go:475] Verifying addon metrics-server=true in "embed-certs-757654"
	I0802 19:05:32.032554   63271 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0802 19:05:32.033965   63271 addons.go:510] duration metric: took 1.779739471s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0802 19:05:32.110124   63271 system_pods.go:86] 9 kube-system pods found
	I0802 19:05:32.110154   63271 system_pods.go:89] "coredns-7db6d8ff4d-bm67n" [97410089-9b08-4ea7-9636-ce635935858f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:32.110161   63271 system_pods.go:89] "coredns-7db6d8ff4d-rfg9v" [1511162d-2bd2-490f-b789-925b904bd691] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0802 19:05:32.110169   63271 system_pods.go:89] "etcd-embed-certs-757654" [b7bffd63-937a-4cd2-8eaa-33b93f526960] Running
	I0802 19:05:32.110174   63271 system_pods.go:89] "kube-apiserver-embed-certs-757654" [79a15028-c9b4-49e4-9e5a-bb1bfe2c303e] Running
	I0802 19:05:32.110179   63271 system_pods.go:89] "kube-controller-manager-embed-certs-757654" [7bfda970-108c-4494-b1e2-07f3a05e2d93] Running
	I0802 19:05:32.110183   63271 system_pods.go:89] "kube-proxy-8w67s" [b3d73c44-1601-4c2f-8399-259dbcd18813] Running
	I0802 19:05:32.110187   63271 system_pods.go:89] "kube-scheduler-embed-certs-757654" [aca4a7c4-4705-47df-982c-0ef501e67852] Running
	I0802 19:05:32.110193   63271 system_pods.go:89] "metrics-server-569cc877fc-d69sk" [4d7a8428-5611-44a4-93a7-4440735668f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0802 19:05:32.110198   63271 system_pods.go:89] "storage-provisioner" [d3300a13-9ee5-4eeb-9e21-9ef40aad1379] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0802 19:05:32.110205   63271 system_pods.go:126] duration metric: took 1.232002006s to wait for k8s-apps to be running ...
	I0802 19:05:32.110213   63271 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:05:32.110255   63271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:05:32.134588   63271 system_svc.go:56] duration metric: took 24.363295ms WaitForService to wait for kubelet
	I0802 19:05:32.134625   63271 kubeadm.go:582] duration metric: took 1.880469395s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:05:32.134649   63271 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:05:32.149396   63271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:05:32.149432   63271 node_conditions.go:123] node cpu capacity is 2
	I0802 19:05:32.149449   63271 node_conditions.go:105] duration metric: took 14.794217ms to run NodePressure ...
	I0802 19:05:32.149465   63271 start.go:241] waiting for startup goroutines ...
	I0802 19:05:32.149477   63271 start.go:246] waiting for cluster config update ...
	I0802 19:05:32.149492   63271 start.go:255] writing updated cluster config ...
	I0802 19:05:32.149833   63271 ssh_runner.go:195] Run: rm -f paused
	I0802 19:05:32.199651   63271 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:05:32.201132   63271 out.go:177] * Done! kubectl is now configured to use "embed-certs-757654" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.779068331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625670779043656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=262cbf24-fbba-45c1-92e2-ee6d2319b50a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.779551468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d7721f0-c489-4a5b-be6f-da2ff28536ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.779637840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d7721f0-c489-4a5b-be6f-da2ff28536ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.779684023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8d7721f0-c489-4a5b-be6f-da2ff28536ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.809012512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c0ad068-5817-4bfa-a054-ed49eca62a7f name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.809079144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c0ad068-5817-4bfa-a054-ed49eca62a7f name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.810314563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3968e5a-ffec-4717-9e17-7b8b23b07e90 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.810742546Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625670810720662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3968e5a-ffec-4717-9e17-7b8b23b07e90 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.811248387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e62812e-7607-4af3-8ed1-ac12ec941a7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.811293901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e62812e-7607-4af3-8ed1-ac12ec941a7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.811324928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2e62812e-7607-4af3-8ed1-ac12ec941a7f name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.839748197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8887f0dd-00f3-4c5a-8e2c-23414af42c06 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.839842918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8887f0dd-00f3-4c5a-8e2c-23414af42c06 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.841072909Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cec72093-5d8f-403d-80cd-72abf69a9577 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.841587780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625670841559582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cec72093-5d8f-403d-80cd-72abf69a9577 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.842156648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e82af85-6f5e-441c-a167-a586776b2385 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.842267444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e82af85-6f5e-441c-a167-a586776b2385 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.842306041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e82af85-6f5e-441c-a167-a586776b2385 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.871455293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=030f11f6-c991-493b-a7cf-34a42e4c9b23 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.871523806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=030f11f6-c991-493b-a7cf-34a42e4c9b23 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.872571265Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4f25fbf-1300-465c-825f-4a965b9e8880 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.872992377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722625670872972312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4f25fbf-1300-465c-825f-4a965b9e8880 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.873438276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b136b468-9bb4-4a73-807b-8ff9941401bd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.873498786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b136b468-9bb4-4a73-807b-8ff9941401bd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:07:50 old-k8s-version-490984 crio[651]: time="2024-08-02 19:07:50.873536341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b136b468-9bb4-4a73-807b-8ff9941401bd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051059] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037584] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.690028] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.750688] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.557853] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.754585] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.059665] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060053] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.196245] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.132013] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.247678] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +5.903520] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.064556] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958055] systemd-fstab-generator[963]: Ignoring "noauto" option for root device
	[Aug 2 18:49] kauditd_printk_skb: 46 callbacks suppressed
	[Aug 2 18:52] systemd-fstab-generator[4990]: Ignoring "noauto" option for root device
	[Aug 2 18:55] systemd-fstab-generator[5277]: Ignoring "noauto" option for root device
	[  +0.065921] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:07:51 up 19 min,  0 users,  load average: 0.05, 0.03, 0.00
	Linux old-k8s-version-490984 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00062c6f0)
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007adef0, 0x4f0ac20, 0xc0003df7c0, 0x1, 0xc0001020c0)
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000256540, 0xc0001020c0)
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00095adf0, 0xc000927700)
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6710]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 02 19:07:46 old-k8s-version-490984 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 02 19:07:46 old-k8s-version-490984 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 02 19:07:46 old-k8s-version-490984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Aug 02 19:07:46 old-k8s-version-490984 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 02 19:07:46 old-k8s-version-490984 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6719]: I0802 19:07:46.838672    6719 server.go:416] Version: v1.20.0
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6719]: I0802 19:07:46.838960    6719 server.go:837] Client rotation is on, will bootstrap in background
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6719]: I0802 19:07:46.840758    6719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6719]: W0802 19:07:46.841751    6719 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 02 19:07:46 old-k8s-version-490984 kubelet[6719]: I0802 19:07:46.841885    6719 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 2 (211.419418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-490984" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (111.48s)
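The post-mortem above ends at the status check: `minikube status --format={{.APIServer}}` exits with status 2 and prints "Stopped", so the harness skips further kubectl commands against the dead apiserver. A rough, hypothetical sketch of that decision (not the actual helpers_test.go code; profile name and binary path copied from the log, error handling assumed) could look like this in Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical reproduction of the post-mortem status check above:
	// ask minikube for the apiserver state of the profile and tolerate a
	// non-zero exit, since stopped profiles report exit status 2 here.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-490984",
		"-n", "old-k8s-version-490984")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// Mirrors "status error: exit status 2 (may be ok)" in the log.
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	if state != "Running" {
		// Mirrors the harness skipping kubectl when the apiserver is down.
		fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
	}
}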

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
  }
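The "-want +got" block above is a go-cmp style diff: every image expected for v1.31.0-rc.0 appears on the "-" side and nothing on the "+" side, meaning `image list` returned none of them. As a minimal illustrative sketch (not the actual start_stop_delete_test.go logic; the empty got slice simply mirrors the empty "+" side of the report), such a diff can be produced with github.com/google/go-cmp/cmp:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images, taken from the "-want" side of the diff above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
		"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
		"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
		"registry.k8s.io/pause:3.10",
	}

	// Images actually parsed from `minikube image list --format=json`;
	// left empty here, which reproduces the all-"-want" diff in the report.
	var got []string

	// cmp.Diff returns "" when want and got match, otherwise a
	// "-want +got" report like the one shown above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}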
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (217.057742ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/VerifyKubernetesImages logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	| start   | -p auto-800809 --memory=3072                           | auto-800809                  | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | no-preload-407306 image list                           | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:07:52.387737   66688 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:07:52.388015   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388029   66688 out.go:304] Setting ErrFile to fd 2...
	I0802 19:07:52.388035   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388298   66688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:07:52.388891   66688 out.go:298] Setting JSON to false
	I0802 19:07:52.389794   66688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6616,"bootTime":1722619056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:07:52.389852   66688 start.go:139] virtualization: kvm guest
	I0802 19:07:52.392168   66688 out.go:177] * [auto-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:07:52.393615   66688 notify.go:220] Checking for updates...
	I0802 19:07:52.393640   66688 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:07:52.395373   66688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:07:52.396717   66688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:07:52.398086   66688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.399452   66688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:07:52.400729   66688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:07:52.402521   66688 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402614   66688 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402703   66688 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 19:07:52.402771   66688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:07:52.439687   66688 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:07:52.440964   66688 start.go:297] selected driver: kvm2
	I0802 19:07:52.440978   66688 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:07:52.440991   66688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:07:52.441693   66688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.441767   66688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:07:52.457467   66688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:07:52.457540   66688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:07:52.457796   66688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:07:52.457864   66688 cni.go:84] Creating CNI manager for ""
	I0802 19:07:52.457882   66688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:07:52.457893   66688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:07:52.457980   66688 start.go:340] cluster config:
	{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:07:52.458099   66688 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.459694   66688 out.go:177] * Starting "auto-800809" primary control-plane node in "auto-800809" cluster
	I0802 19:07:52.460849   66688 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:07:52.460888   66688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:07:52.460898   66688 cache.go:56] Caching tarball of preloaded images
	I0802 19:07:52.461002   66688 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:07:52.461012   66688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:07:52.461103   66688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json ...
	I0802 19:07:52.461119   66688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json: {Name:mkbc30c2051290c26315ed28bd3a600c251b421b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:07:52.461236   66688 start.go:360] acquireMachinesLock for auto-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:07:52.461262   66688 start.go:364] duration metric: took 14.697µs to acquireMachinesLock for "auto-800809"
	I0802 19:07:52.461277   66688 start.go:93] Provisioning new machine with config: &{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:07:52.461334   66688 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 19:07:57.556802     775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:57.558426     775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:57.560036     775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:57.561564     775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:07:57.562904     775 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:50] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 19:07:57 up 18 min,  0 users,  load average: 0.08, 0.02, 0.01
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 19:07:57.192269   67031 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.225250   67031 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.256464   67031 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.287398   67031 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.317564   67031 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.348383   67031 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.386516   67031 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:07:57.427258   67031 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:07:57Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:07:57Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:07:57Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

                                                
                                                
** /stderr **
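Every crictl call in the stderr above fails the same way: with no runtime endpoint configured, crictl walks its four default sockets, and none of them answers because crio.service never came up (see the CRI-O journal excerpt) and no other runtime is present on the node. Below is a minimal diagnostic sketch that makes the same check directly by dialing each default endpoint; it is illustrative only, not part of the test suite, and the two-second timeout is an arbitrary choice:

// Minimal diagnostic sketch: dial the default CRI endpoints that crictl probes
// and report which ones accept a connection. Illustrative only, not minikube code.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// The four default endpoints crictl probes when no runtime endpoint is configured.
	endpoints := []string{
		"unix:///var/run/dockershim.sock",
		"unix:///run/containerd/containerd.sock",
		"unix:///run/crio/crio.sock",
		"unix:///var/run/cri-dockerd.sock",
	}
	for _, ep := range endpoints {
		path := strings.TrimPrefix(ep, "unix://")
		// A plain unix-socket dial is enough to tell "socket missing / refused"
		// apart from "a runtime is listening"; it does not speak the CRI protocol.
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Printf("%-45s unreachable: %v\n", ep, err)
			continue
		}
		conn.Close()
		fmt.Printf("%-45s reachable\n", ep)
	}
}

Pointing crictl at a single endpoint (via its --runtime-endpoint flag or a runtime-endpoint entry in /etc/crictl.yaml) would silence the deprecation warning, but would not change the outcome here, since the CRI-O socket simply does not exist.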
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (211.268765ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.12s)
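The two post-mortem status probes tell a consistent story: the host is Running but {{.APIServer}} reports Stopped, and the earlier describe-nodes attempt failed with "connection refused" on localhost:8443, which means nothing is listening on that port at all rather than the apiserver rejecting the request. A minimal sketch of that distinction, probing the apiserver's /readyz endpoint; the address and the skipped certificate verification are assumptions for a quick local check, and any HTTP status in the response (even 401 or 403 for an unauthenticated request) would mean the apiserver answered:

// Minimal sketch: distinguish "nothing listening on :8443" from "apiserver up but
// rejecting the request". Illustrative only, not part of the test suite.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Only checking whether anything answers on the apiserver port, so skipping
	// certificate verification is acceptable for this probe.
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:8443/readyz")
	if err != nil {
		// "connection refused" here matches what kubectl reported: no listener at all.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	defer resp.Body.Close()
	// Any status code, including 401/403 for an unauthenticated request, means
	// the apiserver process is up and answering on the port.
	fmt.Printf("apiserver answered with HTTP %d\n", resp.StatusCode)
}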

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (4.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-407306 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-407306 --alsologtostderr -v=1: exit status 80 (2.062010319s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-407306 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 19:07:57.887087   67086 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:07:57.887225   67086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:57.887234   67086 out.go:304] Setting ErrFile to fd 2...
	I0802 19:07:57.887251   67086 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:57.887437   67086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:07:57.887648   67086 out.go:298] Setting JSON to false
	I0802 19:07:57.887665   67086 mustload.go:65] Loading cluster: no-preload-407306
	I0802 19:07:57.887981   67086 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 19:07:57.888439   67086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:07:57.888480   67086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:07:57.904225   67086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I0802 19:07:57.904720   67086 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:07:57.905449   67086 main.go:141] libmachine: Using API Version  1
	I0802 19:07:57.905478   67086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:07:57.905864   67086 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:07:57.906181   67086 main.go:141] libmachine: (no-preload-407306) Calling .GetState
	I0802 19:07:57.907775   67086 host.go:66] Checking if "no-preload-407306" exists ...
	I0802 19:07:57.908070   67086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:07:57.908105   67086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:07:57.923507   67086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0802 19:07:57.924031   67086 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:07:57.924502   67086 main.go:141] libmachine: Using API Version  1
	I0802 19:07:57.924526   67086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:07:57.924908   67086 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:07:57.925148   67086 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 19:07:57.926248   67086 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.33.1-1722420371-19355/minikube-v1.33.1-1722420371-19355-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.33.1-1722420371-19355-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-407306 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0802 19:07:57.929653   67086 out.go:177] * Pausing node no-preload-407306 ... 
	I0802 19:07:57.931132   67086 host.go:66] Checking if "no-preload-407306" exists ...
	I0802 19:07:57.931453   67086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:07:57.931494   67086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:07:57.946847   67086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0802 19:07:57.947249   67086 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:07:57.947729   67086 main.go:141] libmachine: Using API Version  1
	I0802 19:07:57.947759   67086 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:07:57.948070   67086 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:07:57.948271   67086 main.go:141] libmachine: (no-preload-407306) Calling .DriverName
	I0802 19:07:57.948471   67086 ssh_runner.go:195] Run: systemctl --version
	I0802 19:07:57.948504   67086 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHHostname
	I0802 19:07:57.951678   67086 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 19:07:57.952127   67086 main.go:141] libmachine: (no-preload-407306) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bd:56:69", ip: ""} in network mk-no-preload-407306: {Iface:virbr3 ExpiryTime:2024-08-02 19:49:42 +0000 UTC Type:0 Mac:52:54:00:bd:56:69 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:no-preload-407306 Clientid:01:52:54:00:bd:56:69}
	I0802 19:07:57.952148   67086 main.go:141] libmachine: (no-preload-407306) DBG | domain no-preload-407306 has defined IP address 192.168.39.168 and MAC address 52:54:00:bd:56:69 in network mk-no-preload-407306
	I0802 19:07:57.952344   67086 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHPort
	I0802 19:07:57.952543   67086 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHKeyPath
	I0802 19:07:57.952679   67086 main.go:141] libmachine: (no-preload-407306) Calling .GetSSHUsername
	I0802 19:07:57.952844   67086 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/no-preload-407306/id_rsa Username:docker}
	I0802 19:07:58.029741   67086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:07:58.043182   67086 pause.go:51] kubelet running: false
	I0802 19:07:58.043244   67086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0802 19:07:58.057406   67086 retry.go:31] will retry after 289.97597ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0802 19:07:58.347997   67086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:07:58.361610   67086 pause.go:51] kubelet running: false
	I0802 19:07:58.361691   67086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0802 19:07:58.376402   67086 retry.go:31] will retry after 236.527645ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0802 19:07:58.613884   67086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:07:58.629771   67086 pause.go:51] kubelet running: false
	I0802 19:07:58.629846   67086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0802 19:07:58.643476   67086 retry.go:31] will retry after 715.275748ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0802 19:07:59.359237   67086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:07:59.373103   67086 pause.go:51] kubelet running: false
	I0802 19:07:59.373168   67086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0802 19:07:59.386450   67086 retry.go:31] will retry after 484.819362ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0802 19:07:59.872260   67086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:07:59.886246   67086 pause.go:51] kubelet running: false
	I0802 19:07:59.886330   67086 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0802 19:07:59.902104   67086 out.go:177] 
	W0802 19:07:59.903741   67086 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0802 19:07:59.903764   67086 out.go:239] * 
	* 
	W0802 19:07:59.906740   67086 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0802 19:07:59.908140   67086 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p no-preload-407306 --alsologtostderr -v=1 failed: exit status 80
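Note on the failure mode above: `pause` sees that kubelet is not running (pause.go:51 reports `kubelet running: false`), still attempts `sudo systemctl disable --now kubelet`, and retries with short backoffs (retry.go:31) until it gives up and exits with GUEST_PAUSE (exit status 80), because the no-preload guest has no kubelet.service unit file at all. The Go sketch below illustrates that retry-with-backoff pattern only; the function name and backoff schedule are illustrative assumptions, the command runs locally rather than over SSH, and this is a simplified stand-in for minikube's internals, not its actual implementation.

// Hypothetical sketch of the retry-with-backoff pattern visible at retry.go:31
// in the log above; this is not minikube's code, and the backoff values are made up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// disableKubelet runs the same command the log shows failing on the guest.
// For illustration it runs locally; minikube executes it over SSH via ssh_runner.
func disableKubelet() error {
	out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubelet disable --now: %v\n%s", err, out)
	}
	return nil
}

func main() {
	backoff := 250 * time.Millisecond
	var err error
	for attempt := 1; attempt <= 5; attempt++ {
		if err = disableKubelet(); err == nil {
			fmt.Println("kubelet disabled")
			return
		}
		fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
	// Once retries are exhausted, minikube surfaces this as GUEST_PAUSE (exit status 80).
	fmt.Println("giving up:", err)
}

Run against a host that has no kubelet.service unit, such a loop would be expected to exhaust its attempts with the same "Failed to disable unit: Unit file kubelet.service does not exist." error seen in the log.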
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (217.106509ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	| start   | -p auto-800809 --memory=3072                           | auto-800809                  | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | no-preload-407306 image list                           | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:07:52.387737   66688 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:07:52.388015   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388029   66688 out.go:304] Setting ErrFile to fd 2...
	I0802 19:07:52.388035   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388298   66688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:07:52.388891   66688 out.go:298] Setting JSON to false
	I0802 19:07:52.389794   66688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6616,"bootTime":1722619056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:07:52.389852   66688 start.go:139] virtualization: kvm guest
	I0802 19:07:52.392168   66688 out.go:177] * [auto-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:07:52.393615   66688 notify.go:220] Checking for updates...
	I0802 19:07:52.393640   66688 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:07:52.395373   66688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:07:52.396717   66688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:07:52.398086   66688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.399452   66688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:07:52.400729   66688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:07:52.402521   66688 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402614   66688 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402703   66688 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 19:07:52.402771   66688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:07:52.439687   66688 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:07:52.440964   66688 start.go:297] selected driver: kvm2
	I0802 19:07:52.440978   66688 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:07:52.440991   66688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:07:52.441693   66688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.441767   66688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:07:52.457467   66688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:07:52.457540   66688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:07:52.457796   66688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:07:52.457864   66688 cni.go:84] Creating CNI manager for ""
	I0802 19:07:52.457882   66688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:07:52.457893   66688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:07:52.457980   66688 start.go:340] cluster config:
	{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:07:52.458099   66688 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.459694   66688 out.go:177] * Starting "auto-800809" primary control-plane node in "auto-800809" cluster
	I0802 19:07:52.460849   66688 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:07:52.460888   66688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:07:52.460898   66688 cache.go:56] Caching tarball of preloaded images
	I0802 19:07:52.461002   66688 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:07:52.461012   66688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:07:52.461103   66688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json ...
	I0802 19:07:52.461119   66688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json: {Name:mkbc30c2051290c26315ed28bd3a600c251b421b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:07:52.461236   66688 start.go:360] acquireMachinesLock for auto-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:07:52.461262   66688 start.go:364] duration metric: took 14.697µs to acquireMachinesLock for "auto-800809"
	I0802 19:07:52.461277   66688 start.go:93] Provisioning new machine with config: &{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:07:52.461334   66688 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 19:07:52.462966   66688 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 19:07:52.463156   66688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:07:52.463210   66688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:07:52.477585   66688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0802 19:07:52.478056   66688 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:07:52.478625   66688 main.go:141] libmachine: Using API Version  1
	I0802 19:07:52.478658   66688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:07:52.479003   66688 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:07:52.479313   66688 main.go:141] libmachine: (auto-800809) Calling .GetMachineName
	I0802 19:07:52.479494   66688 main.go:141] libmachine: (auto-800809) Calling .DriverName
	I0802 19:07:52.479731   66688 start.go:159] libmachine.API.Create for "auto-800809" (driver="kvm2")
	I0802 19:07:52.479759   66688 client.go:168] LocalClient.Create starting
	I0802 19:07:52.479795   66688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 19:07:52.479830   66688 main.go:141] libmachine: Decoding PEM data...
	I0802 19:07:52.479846   66688 main.go:141] libmachine: Parsing certificate...
	I0802 19:07:52.479899   66688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 19:07:52.479921   66688 main.go:141] libmachine: Decoding PEM data...
	I0802 19:07:52.479934   66688 main.go:141] libmachine: Parsing certificate...
	I0802 19:07:52.479950   66688 main.go:141] libmachine: Running pre-create checks...
	I0802 19:07:52.479959   66688 main.go:141] libmachine: (auto-800809) Calling .PreCreateCheck
	I0802 19:07:52.480412   66688 main.go:141] libmachine: (auto-800809) Calling .GetConfigRaw
	I0802 19:07:52.480891   66688 main.go:141] libmachine: Creating machine...
	I0802 19:07:52.480905   66688 main.go:141] libmachine: (auto-800809) Calling .Create
	I0802 19:07:52.481085   66688 main.go:141] libmachine: (auto-800809) Creating KVM machine...
	I0802 19:07:52.482393   66688 main.go:141] libmachine: (auto-800809) DBG | found existing default KVM network
	I0802 19:07:52.483508   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.483379   66711 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:fd:87} reservation:<nil>}
	I0802 19:07:52.484593   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.484527   66711 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010ff90}
	I0802 19:07:52.484643   66688 main.go:141] libmachine: (auto-800809) DBG | created network xml: 
	I0802 19:07:52.484664   66688 main.go:141] libmachine: (auto-800809) DBG | <network>
	I0802 19:07:52.484676   66688 main.go:141] libmachine: (auto-800809) DBG |   <name>mk-auto-800809</name>
	I0802 19:07:52.484694   66688 main.go:141] libmachine: (auto-800809) DBG |   <dns enable='no'/>
	I0802 19:07:52.484703   66688 main.go:141] libmachine: (auto-800809) DBG |   
	I0802 19:07:52.484709   66688 main.go:141] libmachine: (auto-800809) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0802 19:07:52.484714   66688 main.go:141] libmachine: (auto-800809) DBG |     <dhcp>
	I0802 19:07:52.484719   66688 main.go:141] libmachine: (auto-800809) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0802 19:07:52.484730   66688 main.go:141] libmachine: (auto-800809) DBG |     </dhcp>
	I0802 19:07:52.484737   66688 main.go:141] libmachine: (auto-800809) DBG |   </ip>
	I0802 19:07:52.484743   66688 main.go:141] libmachine: (auto-800809) DBG |   
	I0802 19:07:52.484753   66688 main.go:141] libmachine: (auto-800809) DBG | </network>
	I0802 19:07:52.484772   66688 main.go:141] libmachine: (auto-800809) DBG | 
	I0802 19:07:52.490284   66688 main.go:141] libmachine: (auto-800809) DBG | trying to create private KVM network mk-auto-800809 192.168.50.0/24...
	I0802 19:07:52.559652   66688 main.go:141] libmachine: (auto-800809) DBG | private KVM network mk-auto-800809 192.168.50.0/24 created
	I0802 19:07:52.559749   66688 main.go:141] libmachine: (auto-800809) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 ...
	I0802 19:07:52.559777   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.559532   66711 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.559820   66688 main.go:141] libmachine: (auto-800809) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 19:07:52.559849   66688 main.go:141] libmachine: (auto-800809) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 19:07:52.828577   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.828432   66711 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/id_rsa...
	I0802 19:07:53.100383   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:53.100240   66711 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/auto-800809.rawdisk...
	I0802 19:07:53.100408   66688 main.go:141] libmachine: (auto-800809) DBG | Writing magic tar header
	I0802 19:07:53.100420   66688 main.go:141] libmachine: (auto-800809) DBG | Writing SSH key tar header
	I0802 19:07:53.100432   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:53.100372   66711 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 ...
	I0802 19:07:53.100541   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809
	I0802 19:07:53.100568   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 (perms=drwx------)
	I0802 19:07:53.100590   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 19:07:53.100597   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 19:07:53.100609   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 19:07:53.100620   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 19:07:53.100632   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 19:07:53.100644   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 19:07:53.100656   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:53.100683   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 19:07:53.100702   66688 main.go:141] libmachine: (auto-800809) Creating domain...
	I0802 19:07:53.100708   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 19:07:53.100722   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins
	I0802 19:07:53.100733   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home
	I0802 19:07:53.100755   66688 main.go:141] libmachine: (auto-800809) DBG | Skipping /home - not owner
	I0802 19:07:53.101953   66688 main.go:141] libmachine: (auto-800809) define libvirt domain using xml: 
	I0802 19:07:53.101972   66688 main.go:141] libmachine: (auto-800809) <domain type='kvm'>
	I0802 19:07:53.101983   66688 main.go:141] libmachine: (auto-800809)   <name>auto-800809</name>
	I0802 19:07:53.101990   66688 main.go:141] libmachine: (auto-800809)   <memory unit='MiB'>3072</memory>
	I0802 19:07:53.101996   66688 main.go:141] libmachine: (auto-800809)   <vcpu>2</vcpu>
	I0802 19:07:53.102001   66688 main.go:141] libmachine: (auto-800809)   <features>
	I0802 19:07:53.102006   66688 main.go:141] libmachine: (auto-800809)     <acpi/>
	I0802 19:07:53.102010   66688 main.go:141] libmachine: (auto-800809)     <apic/>
	I0802 19:07:53.102026   66688 main.go:141] libmachine: (auto-800809)     <pae/>
	I0802 19:07:53.102033   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102038   66688 main.go:141] libmachine: (auto-800809)   </features>
	I0802 19:07:53.102043   66688 main.go:141] libmachine: (auto-800809)   <cpu mode='host-passthrough'>
	I0802 19:07:53.102048   66688 main.go:141] libmachine: (auto-800809)   
	I0802 19:07:53.102053   66688 main.go:141] libmachine: (auto-800809)   </cpu>
	I0802 19:07:53.102060   66688 main.go:141] libmachine: (auto-800809)   <os>
	I0802 19:07:53.102064   66688 main.go:141] libmachine: (auto-800809)     <type>hvm</type>
	I0802 19:07:53.102070   66688 main.go:141] libmachine: (auto-800809)     <boot dev='cdrom'/>
	I0802 19:07:53.102074   66688 main.go:141] libmachine: (auto-800809)     <boot dev='hd'/>
	I0802 19:07:53.102083   66688 main.go:141] libmachine: (auto-800809)     <bootmenu enable='no'/>
	I0802 19:07:53.102093   66688 main.go:141] libmachine: (auto-800809)   </os>
	I0802 19:07:53.102101   66688 main.go:141] libmachine: (auto-800809)   <devices>
	I0802 19:07:53.102119   66688 main.go:141] libmachine: (auto-800809)     <disk type='file' device='cdrom'>
	I0802 19:07:53.102146   66688 main.go:141] libmachine: (auto-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/boot2docker.iso'/>
	I0802 19:07:53.102158   66688 main.go:141] libmachine: (auto-800809)       <target dev='hdc' bus='scsi'/>
	I0802 19:07:53.102165   66688 main.go:141] libmachine: (auto-800809)       <readonly/>
	I0802 19:07:53.102180   66688 main.go:141] libmachine: (auto-800809)     </disk>
	I0802 19:07:53.102216   66688 main.go:141] libmachine: (auto-800809)     <disk type='file' device='disk'>
	I0802 19:07:53.102252   66688 main.go:141] libmachine: (auto-800809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 19:07:53.102270   66688 main.go:141] libmachine: (auto-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/auto-800809.rawdisk'/>
	I0802 19:07:53.102282   66688 main.go:141] libmachine: (auto-800809)       <target dev='hda' bus='virtio'/>
	I0802 19:07:53.102291   66688 main.go:141] libmachine: (auto-800809)     </disk>
	I0802 19:07:53.102329   66688 main.go:141] libmachine: (auto-800809)     <interface type='network'>
	I0802 19:07:53.102341   66688 main.go:141] libmachine: (auto-800809)       <source network='mk-auto-800809'/>
	I0802 19:07:53.102351   66688 main.go:141] libmachine: (auto-800809)       <model type='virtio'/>
	I0802 19:07:53.102360   66688 main.go:141] libmachine: (auto-800809)     </interface>
	I0802 19:07:53.102369   66688 main.go:141] libmachine: (auto-800809)     <interface type='network'>
	I0802 19:07:53.102378   66688 main.go:141] libmachine: (auto-800809)       <source network='default'/>
	I0802 19:07:53.102393   66688 main.go:141] libmachine: (auto-800809)       <model type='virtio'/>
	I0802 19:07:53.102408   66688 main.go:141] libmachine: (auto-800809)     </interface>
	I0802 19:07:53.102421   66688 main.go:141] libmachine: (auto-800809)     <serial type='pty'>
	I0802 19:07:53.102449   66688 main.go:141] libmachine: (auto-800809)       <target port='0'/>
	I0802 19:07:53.102471   66688 main.go:141] libmachine: (auto-800809)     </serial>
	I0802 19:07:53.102481   66688 main.go:141] libmachine: (auto-800809)     <console type='pty'>
	I0802 19:07:53.102496   66688 main.go:141] libmachine: (auto-800809)       <target type='serial' port='0'/>
	I0802 19:07:53.102505   66688 main.go:141] libmachine: (auto-800809)     </console>
	I0802 19:07:53.102512   66688 main.go:141] libmachine: (auto-800809)     <rng model='virtio'>
	I0802 19:07:53.102524   66688 main.go:141] libmachine: (auto-800809)       <backend model='random'>/dev/random</backend>
	I0802 19:07:53.102530   66688 main.go:141] libmachine: (auto-800809)     </rng>
	I0802 19:07:53.102538   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102544   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102553   66688 main.go:141] libmachine: (auto-800809)   </devices>
	I0802 19:07:53.102560   66688 main.go:141] libmachine: (auto-800809) </domain>
	I0802 19:07:53.102579   66688 main.go:141] libmachine: (auto-800809) 
	I0802 19:07:53.107607   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:b6:8c:58 in network default
	I0802 19:07:53.108280   66688 main.go:141] libmachine: (auto-800809) Ensuring networks are active...
	I0802 19:07:53.108308   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:53.109030   66688 main.go:141] libmachine: (auto-800809) Ensuring network default is active
	I0802 19:07:53.109435   66688 main.go:141] libmachine: (auto-800809) Ensuring network mk-auto-800809 is active
	I0802 19:07:53.109988   66688 main.go:141] libmachine: (auto-800809) Getting domain xml...
	I0802 19:07:53.110791   66688 main.go:141] libmachine: (auto-800809) Creating domain...
	I0802 19:07:54.373945   66688 main.go:141] libmachine: (auto-800809) Waiting to get IP...
	I0802 19:07:54.374783   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.375265   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.375288   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.375243   66711 retry.go:31] will retry after 253.632271ms: waiting for machine to come up
	I0802 19:07:54.631510   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.631970   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.632002   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.631923   66711 retry.go:31] will retry after 293.568033ms: waiting for machine to come up
	I0802 19:07:54.927553   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.928008   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.928039   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.927961   66711 retry.go:31] will retry after 420.301291ms: waiting for machine to come up
	I0802 19:07:55.349529   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:55.350041   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:55.350068   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:55.349995   66711 retry.go:31] will retry after 590.281836ms: waiting for machine to come up
	I0802 19:07:55.941711   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:55.942202   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:55.942226   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:55.942170   66711 retry.go:31] will retry after 462.407917ms: waiting for machine to come up
	I0802 19:07:56.406428   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:56.406955   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:56.406977   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:56.406921   66711 retry.go:31] will retry after 732.579031ms: waiting for machine to come up
	I0802 19:07:57.141191   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:57.141687   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:57.141717   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:57.141612   66711 retry.go:31] will retry after 1.100566497s: waiting for machine to come up
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 19:08:00.687567     876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:00.689193     876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:00.690726     876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:00.692184     876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:00.693614     876 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:50] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 19:08:00 up 18 min,  0 users,  load average: 0.15, 0.03, 0.01
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0802 19:08:00.333055   67161 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.365543   67161 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.400026   67161 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.431598   67161 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.463682   67161 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.496281   67161 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.526845   67161 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:00.558188   67161 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:00Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:00Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:00Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

                                                
                                                
** /stderr **
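Every crictl call above fails the same way: with CRI-O down, crictl probes the four deprecated default endpoints and gives up at cri-dockerd. As the warning itself suggests, pinning the endpoint avoids the probe noise. A minimal sketch, not part of this run, assuming crictl is present on the node and CRI-O's socket is at its usual path:

    # one-off
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    # or persist the same endpoints in /etc/crictl.yaml so a plain "sudo crictl ps -a" works:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    #   image-endpoint: unix:///var/run/crio/crio.sock

Even with the endpoint pinned, these calls would still fail here, because crio.service itself never started on this boot (see the CRI-O section further down).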
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (231.999121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407306 -n no-preload-407306: exit status 2 (233.927927ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407306 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p old-k8s-version-490984             | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-504903       | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-504903 | jenkins | v1.33.1 | 02 Aug 24 18:44 UTC | 02 Aug 24 18:53 UTC |
	|         | default-k8s-diff-port-504903                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-132946                           | kubernetes-upgrade-132946    | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:45 UTC |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:45 UTC | 02 Aug 24 18:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-198962             | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:49 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-198962                  | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-198962 --memory=2200 --alsologtostderr   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                              |         |         |                     |                     |
	| image   | newest-cni-198962 image list                           | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p newest-cni-198962                                   | newest-cni-198962            | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	| delete  | -p                                                     | disable-driver-mounts-684611 | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:50 UTC |
	|         | disable-driver-mounts-684611                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:50 UTC | 02 Aug 24 18:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-757654            | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC | 02 Aug 24 18:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-757654                 | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-757654                                  | embed-certs-757654           | jenkins | v1.33.1 | 02 Aug 24 18:55 UTC | 02 Aug 24 19:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-490984                              | old-k8s-version-490984       | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	| start   | -p auto-800809 --memory=3072                           | auto-800809                  | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | no-preload-407306 image list                           | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC | 02 Aug 24 19:07 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-407306                                   | no-preload-407306            | jenkins | v1.33.1 | 02 Aug 24 19:07 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
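Note that the Args column wraps across rows; each entry reconstructs to a single CLI invocation. For example, the final row (the pause step this post-mortem covers) corresponds roughly to:

    out/minikube-linux-amd64 pause -p no-preload-407306 --alsologtostderr -v=1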
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:07:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:07:52.387737   66688 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:07:52.388015   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388029   66688 out.go:304] Setting ErrFile to fd 2...
	I0802 19:07:52.388035   66688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:07:52.388298   66688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:07:52.388891   66688 out.go:298] Setting JSON to false
	I0802 19:07:52.389794   66688 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6616,"bootTime":1722619056,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:07:52.389852   66688 start.go:139] virtualization: kvm guest
	I0802 19:07:52.392168   66688 out.go:177] * [auto-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:07:52.393615   66688 notify.go:220] Checking for updates...
	I0802 19:07:52.393640   66688 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:07:52.395373   66688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:07:52.396717   66688 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:07:52.398086   66688 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.399452   66688 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:07:52.400729   66688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:07:52.402521   66688 config.go:182] Loaded profile config "default-k8s-diff-port-504903": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402614   66688 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:07:52.402703   66688 config.go:182] Loaded profile config "no-preload-407306": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0802 19:07:52.402771   66688 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:07:52.439687   66688 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:07:52.440964   66688 start.go:297] selected driver: kvm2
	I0802 19:07:52.440978   66688 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:07:52.440991   66688 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:07:52.441693   66688 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.441767   66688 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:07:52.457467   66688 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:07:52.457540   66688 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:07:52.457796   66688 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:07:52.457864   66688 cni.go:84] Creating CNI manager for ""
	I0802 19:07:52.457882   66688 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 19:07:52.457893   66688 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:07:52.457980   66688 start.go:340] cluster config:
	{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:07:52.458099   66688 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:07:52.459694   66688 out.go:177] * Starting "auto-800809" primary control-plane node in "auto-800809" cluster
	I0802 19:07:52.460849   66688 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:07:52.460888   66688 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:07:52.460898   66688 cache.go:56] Caching tarball of preloaded images
	I0802 19:07:52.461002   66688 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:07:52.461012   66688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:07:52.461103   66688 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json ...
	I0802 19:07:52.461119   66688 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/config.json: {Name:mkbc30c2051290c26315ed28bd3a600c251b421b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:07:52.461236   66688 start.go:360] acquireMachinesLock for auto-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:07:52.461262   66688 start.go:364] duration metric: took 14.697µs to acquireMachinesLock for "auto-800809"
	I0802 19:07:52.461277   66688 start.go:93] Provisioning new machine with config: &{Name:auto-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:07:52.461334   66688 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 19:07:52.462966   66688 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 19:07:52.463156   66688 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:07:52.463210   66688 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:07:52.477585   66688 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0802 19:07:52.478056   66688 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:07:52.478625   66688 main.go:141] libmachine: Using API Version  1
	I0802 19:07:52.478658   66688 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:07:52.479003   66688 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:07:52.479313   66688 main.go:141] libmachine: (auto-800809) Calling .GetMachineName
	I0802 19:07:52.479494   66688 main.go:141] libmachine: (auto-800809) Calling .DriverName
	I0802 19:07:52.479731   66688 start.go:159] libmachine.API.Create for "auto-800809" (driver="kvm2")
	I0802 19:07:52.479759   66688 client.go:168] LocalClient.Create starting
	I0802 19:07:52.479795   66688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 19:07:52.479830   66688 main.go:141] libmachine: Decoding PEM data...
	I0802 19:07:52.479846   66688 main.go:141] libmachine: Parsing certificate...
	I0802 19:07:52.479899   66688 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 19:07:52.479921   66688 main.go:141] libmachine: Decoding PEM data...
	I0802 19:07:52.479934   66688 main.go:141] libmachine: Parsing certificate...
	I0802 19:07:52.479950   66688 main.go:141] libmachine: Running pre-create checks...
	I0802 19:07:52.479959   66688 main.go:141] libmachine: (auto-800809) Calling .PreCreateCheck
	I0802 19:07:52.480412   66688 main.go:141] libmachine: (auto-800809) Calling .GetConfigRaw
	I0802 19:07:52.480891   66688 main.go:141] libmachine: Creating machine...
	I0802 19:07:52.480905   66688 main.go:141] libmachine: (auto-800809) Calling .Create
	I0802 19:07:52.481085   66688 main.go:141] libmachine: (auto-800809) Creating KVM machine...
	I0802 19:07:52.482393   66688 main.go:141] libmachine: (auto-800809) DBG | found existing default KVM network
	I0802 19:07:52.483508   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.483379   66711 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:13:fd:87} reservation:<nil>}
	I0802 19:07:52.484593   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.484527   66711 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010ff90}
	I0802 19:07:52.484643   66688 main.go:141] libmachine: (auto-800809) DBG | created network xml: 
	I0802 19:07:52.484664   66688 main.go:141] libmachine: (auto-800809) DBG | <network>
	I0802 19:07:52.484676   66688 main.go:141] libmachine: (auto-800809) DBG |   <name>mk-auto-800809</name>
	I0802 19:07:52.484694   66688 main.go:141] libmachine: (auto-800809) DBG |   <dns enable='no'/>
	I0802 19:07:52.484703   66688 main.go:141] libmachine: (auto-800809) DBG |   
	I0802 19:07:52.484709   66688 main.go:141] libmachine: (auto-800809) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0802 19:07:52.484714   66688 main.go:141] libmachine: (auto-800809) DBG |     <dhcp>
	I0802 19:07:52.484719   66688 main.go:141] libmachine: (auto-800809) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0802 19:07:52.484730   66688 main.go:141] libmachine: (auto-800809) DBG |     </dhcp>
	I0802 19:07:52.484737   66688 main.go:141] libmachine: (auto-800809) DBG |   </ip>
	I0802 19:07:52.484743   66688 main.go:141] libmachine: (auto-800809) DBG |   
	I0802 19:07:52.484753   66688 main.go:141] libmachine: (auto-800809) DBG | </network>
	I0802 19:07:52.484772   66688 main.go:141] libmachine: (auto-800809) DBG | 
	I0802 19:07:52.490284   66688 main.go:141] libmachine: (auto-800809) DBG | trying to create private KVM network mk-auto-800809 192.168.50.0/24...
	I0802 19:07:52.559652   66688 main.go:141] libmachine: (auto-800809) DBG | private KVM network mk-auto-800809 192.168.50.0/24 created
	I0802 19:07:52.559749   66688 main.go:141] libmachine: (auto-800809) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 ...
	I0802 19:07:52.559777   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.559532   66711 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:52.559820   66688 main.go:141] libmachine: (auto-800809) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 19:07:52.559849   66688 main.go:141] libmachine: (auto-800809) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 19:07:52.828577   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:52.828432   66711 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/id_rsa...
	I0802 19:07:53.100383   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:53.100240   66711 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/auto-800809.rawdisk...
	I0802 19:07:53.100408   66688 main.go:141] libmachine: (auto-800809) DBG | Writing magic tar header
	I0802 19:07:53.100420   66688 main.go:141] libmachine: (auto-800809) DBG | Writing SSH key tar header
	I0802 19:07:53.100432   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:53.100372   66711 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 ...
	I0802 19:07:53.100541   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809
	I0802 19:07:53.100568   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809 (perms=drwx------)
	I0802 19:07:53.100590   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 19:07:53.100597   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 19:07:53.100609   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 19:07:53.100620   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 19:07:53.100632   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 19:07:53.100644   66688 main.go:141] libmachine: (auto-800809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 19:07:53.100656   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:07:53.100683   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 19:07:53.100702   66688 main.go:141] libmachine: (auto-800809) Creating domain...
	I0802 19:07:53.100708   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 19:07:53.100722   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home/jenkins
	I0802 19:07:53.100733   66688 main.go:141] libmachine: (auto-800809) DBG | Checking permissions on dir: /home
	I0802 19:07:53.100755   66688 main.go:141] libmachine: (auto-800809) DBG | Skipping /home - not owner
	I0802 19:07:53.101953   66688 main.go:141] libmachine: (auto-800809) define libvirt domain using xml: 
	I0802 19:07:53.101972   66688 main.go:141] libmachine: (auto-800809) <domain type='kvm'>
	I0802 19:07:53.101983   66688 main.go:141] libmachine: (auto-800809)   <name>auto-800809</name>
	I0802 19:07:53.101990   66688 main.go:141] libmachine: (auto-800809)   <memory unit='MiB'>3072</memory>
	I0802 19:07:53.101996   66688 main.go:141] libmachine: (auto-800809)   <vcpu>2</vcpu>
	I0802 19:07:53.102001   66688 main.go:141] libmachine: (auto-800809)   <features>
	I0802 19:07:53.102006   66688 main.go:141] libmachine: (auto-800809)     <acpi/>
	I0802 19:07:53.102010   66688 main.go:141] libmachine: (auto-800809)     <apic/>
	I0802 19:07:53.102026   66688 main.go:141] libmachine: (auto-800809)     <pae/>
	I0802 19:07:53.102033   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102038   66688 main.go:141] libmachine: (auto-800809)   </features>
	I0802 19:07:53.102043   66688 main.go:141] libmachine: (auto-800809)   <cpu mode='host-passthrough'>
	I0802 19:07:53.102048   66688 main.go:141] libmachine: (auto-800809)   
	I0802 19:07:53.102053   66688 main.go:141] libmachine: (auto-800809)   </cpu>
	I0802 19:07:53.102060   66688 main.go:141] libmachine: (auto-800809)   <os>
	I0802 19:07:53.102064   66688 main.go:141] libmachine: (auto-800809)     <type>hvm</type>
	I0802 19:07:53.102070   66688 main.go:141] libmachine: (auto-800809)     <boot dev='cdrom'/>
	I0802 19:07:53.102074   66688 main.go:141] libmachine: (auto-800809)     <boot dev='hd'/>
	I0802 19:07:53.102083   66688 main.go:141] libmachine: (auto-800809)     <bootmenu enable='no'/>
	I0802 19:07:53.102093   66688 main.go:141] libmachine: (auto-800809)   </os>
	I0802 19:07:53.102101   66688 main.go:141] libmachine: (auto-800809)   <devices>
	I0802 19:07:53.102119   66688 main.go:141] libmachine: (auto-800809)     <disk type='file' device='cdrom'>
	I0802 19:07:53.102146   66688 main.go:141] libmachine: (auto-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/boot2docker.iso'/>
	I0802 19:07:53.102158   66688 main.go:141] libmachine: (auto-800809)       <target dev='hdc' bus='scsi'/>
	I0802 19:07:53.102165   66688 main.go:141] libmachine: (auto-800809)       <readonly/>
	I0802 19:07:53.102180   66688 main.go:141] libmachine: (auto-800809)     </disk>
	I0802 19:07:53.102216   66688 main.go:141] libmachine: (auto-800809)     <disk type='file' device='disk'>
	I0802 19:07:53.102252   66688 main.go:141] libmachine: (auto-800809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 19:07:53.102270   66688 main.go:141] libmachine: (auto-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/auto-800809/auto-800809.rawdisk'/>
	I0802 19:07:53.102282   66688 main.go:141] libmachine: (auto-800809)       <target dev='hda' bus='virtio'/>
	I0802 19:07:53.102291   66688 main.go:141] libmachine: (auto-800809)     </disk>
	I0802 19:07:53.102329   66688 main.go:141] libmachine: (auto-800809)     <interface type='network'>
	I0802 19:07:53.102341   66688 main.go:141] libmachine: (auto-800809)       <source network='mk-auto-800809'/>
	I0802 19:07:53.102351   66688 main.go:141] libmachine: (auto-800809)       <model type='virtio'/>
	I0802 19:07:53.102360   66688 main.go:141] libmachine: (auto-800809)     </interface>
	I0802 19:07:53.102369   66688 main.go:141] libmachine: (auto-800809)     <interface type='network'>
	I0802 19:07:53.102378   66688 main.go:141] libmachine: (auto-800809)       <source network='default'/>
	I0802 19:07:53.102393   66688 main.go:141] libmachine: (auto-800809)       <model type='virtio'/>
	I0802 19:07:53.102408   66688 main.go:141] libmachine: (auto-800809)     </interface>
	I0802 19:07:53.102421   66688 main.go:141] libmachine: (auto-800809)     <serial type='pty'>
	I0802 19:07:53.102449   66688 main.go:141] libmachine: (auto-800809)       <target port='0'/>
	I0802 19:07:53.102471   66688 main.go:141] libmachine: (auto-800809)     </serial>
	I0802 19:07:53.102481   66688 main.go:141] libmachine: (auto-800809)     <console type='pty'>
	I0802 19:07:53.102496   66688 main.go:141] libmachine: (auto-800809)       <target type='serial' port='0'/>
	I0802 19:07:53.102505   66688 main.go:141] libmachine: (auto-800809)     </console>
	I0802 19:07:53.102512   66688 main.go:141] libmachine: (auto-800809)     <rng model='virtio'>
	I0802 19:07:53.102524   66688 main.go:141] libmachine: (auto-800809)       <backend model='random'>/dev/random</backend>
	I0802 19:07:53.102530   66688 main.go:141] libmachine: (auto-800809)     </rng>
	I0802 19:07:53.102538   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102544   66688 main.go:141] libmachine: (auto-800809)     
	I0802 19:07:53.102553   66688 main.go:141] libmachine: (auto-800809)   </devices>
	I0802 19:07:53.102560   66688 main.go:141] libmachine: (auto-800809) </domain>
	I0802 19:07:53.102579   66688 main.go:141] libmachine: (auto-800809) 
	I0802 19:07:53.107607   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:b6:8c:58 in network default
	I0802 19:07:53.108280   66688 main.go:141] libmachine: (auto-800809) Ensuring networks are active...
	I0802 19:07:53.108308   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:53.109030   66688 main.go:141] libmachine: (auto-800809) Ensuring network default is active
	I0802 19:07:53.109435   66688 main.go:141] libmachine: (auto-800809) Ensuring network mk-auto-800809 is active
	I0802 19:07:53.109988   66688 main.go:141] libmachine: (auto-800809) Getting domain xml...
	I0802 19:07:53.110791   66688 main.go:141] libmachine: (auto-800809) Creating domain...
	I0802 19:07:54.373945   66688 main.go:141] libmachine: (auto-800809) Waiting to get IP...
	I0802 19:07:54.374783   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.375265   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.375288   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.375243   66711 retry.go:31] will retry after 253.632271ms: waiting for machine to come up
	I0802 19:07:54.631510   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.631970   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.632002   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.631923   66711 retry.go:31] will retry after 293.568033ms: waiting for machine to come up
	I0802 19:07:54.927553   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:54.928008   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:54.928039   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:54.927961   66711 retry.go:31] will retry after 420.301291ms: waiting for machine to come up
	I0802 19:07:55.349529   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:55.350041   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:55.350068   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:55.349995   66711 retry.go:31] will retry after 590.281836ms: waiting for machine to come up
	I0802 19:07:55.941711   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:55.942202   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:55.942226   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:55.942170   66711 retry.go:31] will retry after 462.407917ms: waiting for machine to come up
	I0802 19:07:56.406428   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:56.406955   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:56.406977   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:56.406921   66711 retry.go:31] will retry after 732.579031ms: waiting for machine to come up
	I0802 19:07:57.141191   66688 main.go:141] libmachine: (auto-800809) DBG | domain auto-800809 has defined MAC address 52:54:00:0b:be:b6 in network mk-auto-800809
	I0802 19:07:57.141687   66688 main.go:141] libmachine: (auto-800809) DBG | unable to find current IP address of domain auto-800809 in network mk-auto-800809
	I0802 19:07:57.141717   66688 main.go:141] libmachine: (auto-800809) DBG | I0802 19:07:57.141612   66711 retry.go:31] will retry after 1.100566497s: waiting for machine to come up
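The retry.go lines above show the KVM driver polling libvirt for the guest's IP address, with the wait between attempts growing each time. As a rough sketch of that poll-with-backoff pattern only (lookupIP, the starting delay, and the growth factor below are illustrative assumptions, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network's
// DHCP leases for the domain's MAC address; it fails until the guest has
// obtained an address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.10", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, mirroring the increasing
		// "will retry after ..." intervals in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("retry %d: %v, waiting %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}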
	
	
	==> CRI-O <==
	Aug 02 18:49:43 minikube systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:43 minikube systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	Aug 02 18:49:51 no-preload-407306 systemd[1]: Dependency failed for Container Runtime Interface for OCI (CRI-O).
	Aug 02 18:49:51 no-preload-407306 systemd[1]: crio.service: Job crio.service/start failed with result 'dependency'.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
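The container status probe above shells out to crictl and falls back to docker when crictl cannot reach a runtime, and crictl itself warns that it is probing a deprecated list of default socket endpoints. A minimal sketch of that try-then-fall-back shape, with the CRI-O endpoint pinned explicitly so the default-endpoint probing is skipped (the command lists and endpoint below are assumptions for illustration):

package main

import (
	"fmt"
	"os/exec"
)

// runWithFallback tries the primary command and, if it exits non-zero,
// falls back to the secondary one -- the same shape as the
// `sudo crictl ps -a || sudo docker ps -a` probe in the log above.
func runWithFallback(primary, fallback []string) ([]byte, error) {
	out, err := exec.Command(primary[0], primary[1:]...).CombinedOutput()
	if err == nil {
		return out, nil
	}
	return exec.Command(fallback[0], fallback[1:]...).CombinedOutput()
}

func main() {
	out, err := runWithFallback(
		[]string{"crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock", "ps", "-a"},
		[]string{"docker", "ps", "-a"},
	)
	if err != nil {
		fmt.Println("both runtimes unreachable:", err)
		return
	}
	fmt.Print(string(out))
}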
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0802 19:08:01.834177     962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:01.835750     962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:01.837132     962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:01.838484     962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	E0802 19:08:01.839812     962 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp 127.0.0.1:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
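The describe-nodes failure above is just kubectl being unable to reach an apiserver on localhost:8443. A minimal reachability check that distinguishes "nothing is listening" from other failures (the address and timeout here are assumptions):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A quick TCP probe of the endpoint kubectl is failing to reach above;
	// a refused dial means nothing is listening on the port, which matches
	// the "connection refused" errors in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}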
	
	
	==> dmesg <==
	[Aug 2 18:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052268] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038133] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.175966] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.956805] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +0.895840] overlayfs: failed to resolve '/var/lib/containers/storage/overlay/compat441482906/lower1': -2
	[  +0.695966] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 2 18:50] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> kernel <==
	 19:08:01 up 18 min,  0 users,  load average: 0.15, 0.03, 0.01
	Linux no-preload-407306 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	-- No entries --
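The kubelet section is empty because the kubelet unit never logged anything on this node; minikube gathers that section from the systemd journal. A small sketch of pulling the same logs directly, assuming journalctl is present on the guest:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Read kubelet logs straight from the systemd journal; on this node the
	// output would be empty because the unit never started.
	out, err := exec.Command("journalctl", "-u", "kubelet", "--no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("journalctl failed:", err)
	}
	fmt.Print(string(out))
}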
	

-- /stdout --
** stderr ** 
	E0802 19:08:01.465091   67245 logs.go:273] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.494901   67245 logs.go:273] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.537491   67245 logs.go:273] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.568989   67245 logs.go:273] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.601654   67245 logs.go:273] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.632741   67245 logs.go:273] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.663071   67245 logs.go:273] Failed to list containers for "kindnet": crictl list: sudo crictl ps -a --quiet --name=kindnet: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	E0802 19:08:01.692110   67245 logs.go:273] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-02T19:08:01Z" level=warning msg="runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead."
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///run/crio/crio.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/crio/crio.sock: connect: no such file or directory\""
	time="2024-08-02T19:08:01Z" level=error msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""
	time="2024-08-02T19:08:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused\""

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407306 -n no-preload-407306: exit status 2 (230.689574ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "no-preload-407306" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (4.30s)
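The --format={{.APIServer}} argument in the status check above is a Go text/template evaluated against the status result, which is why the command prints a single word. A minimal sketch, assuming an illustrative Status struct rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status loosely mirrors the kind of struct a `status --format` template is
// executed against; the field set here is illustrative only.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// {{.APIServer}} selects one field, which is why the command above
	// prints just "Stopped".
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}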

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (376.43s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0802 19:14:35.309894   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:14:35.558578   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:38.119474   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:42.305609   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt: no such file or directory
E0802 19:14:43.240677   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:14:50.510609   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/client.crt: no such file or directory
E0802 19:14:53.481258   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:15:13.962077   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:15:14.261565   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 19:15:16.270252   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:15:45.760640   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:45.765951   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:45.776231   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:45.796520   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:45.836880   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:45.917261   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:46.077707   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:46.398383   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:47.039070   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:48.319806   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:50.880511   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:15:54.922793   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:15:56.001092   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:16:06.242087   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:16:09.055588   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/client.crt: no such file or directory
E0802 19:16:22.555687   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.560957   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.571190   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.591504   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.631805   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.712421   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:22.872867   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:23.193524   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:23.834574   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:25.115555   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:26.722214   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:16:27.676473   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:32.797231   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:36.740870   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/client.crt: no such file or directory
E0802 19:16:38.120480   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.125778   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.136061   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.156369   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.190632   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:16:38.196816   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.277164   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.437626   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:38.758120   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:39.399287   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:40.679853   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:43.038408   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:16:43.240840   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:48.361047   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:16:58.457236   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt: no such file or directory
E0802 19:16:58.601600   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:17:03.518694   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:17:06.667525   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/client.crt: no such file or directory
E0802 19:17:07.683177   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:17:16.843283   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:17:19.082407   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:17:26.146519   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/old-k8s-version-490984/client.crt: no such file or directory
E0802 19:17:34.351240   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/default-k8s-diff-port-504903/client.crt: no such file or directory
E0802 19:17:43.928330   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 19:17:44.479557   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:17:50.391287   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.396568   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.406854   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.427212   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.467567   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.547928   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:50.708074   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:51.028644   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:51.669710   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:52.950833   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:17:55.511694   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:18:00.043581   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:18:00.632492   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:18:10.873315   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:18:29.603402   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
E0802 19:18:31.138203   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.143464   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.153753   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.174049   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.214369   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.294726   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.354023   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:18:31.455260   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:31.775448   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:32.416558   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:33.697092   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:36.257466   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:41.378331   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:51.619157   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:18:54.347222   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:19:06.400387   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/custom-flannel-800809/client.crt: no such file or directory
E0802 19:19:12.099723   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:19:12.315184   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:19:21.963803   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/enable-default-cni-800809/client.crt: no such file or directory
E0802 19:19:22.031070   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:19:32.999986   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:19:53.060545   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: no such file or directory
E0802 19:19:57.307307   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 19:20:00.683487   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/kindnet-800809/client.crt: no such file or directory
E0802 19:20:14.261070   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 19:20:34.235441   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: no such file or directory
E0802 19:20:45.760824   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/calico-800809/client.crt: no such file or directory
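The repeated cert_rotation.go errors above come from a client certificate reloader that keeps re-reading a client.crt whose profile directory has already been deleted, so every reload fails with "no such file or directory". A rough sketch of that reload-from-disk loop, with hypothetical paths, and not client-go's actual implementation:

package main

import (
	"crypto/tls"
	"log"
	"time"
)

// reloadKeyPair re-reads a client certificate/key pair from disk, as a
// certificate-rotation watcher would; once the profile directory is gone,
// this keeps failing with "no such file or directory", matching the errors
// above.
func reloadKeyPair(certFile, keyFile string) (*tls.Certificate, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &cert, nil
}

func main() {
	certFile := "/path/to/profiles/example/client.crt" // hypothetical path
	keyFile := "/path/to/profiles/example/client.key"  // hypothetical path
	for i := 0; i < 3; i++ {
		if _, err := reloadKeyPair(certFile, keyFile); err != nil {
			log.Printf("key failed with : %v", err)
		}
		time.Sleep(time.Second)
	}
}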
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757654 -n embed-certs-757654
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-02 19:20:49.253132628 +0000 UTC m=+6870.371300249
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-757654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-757654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.338µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-757654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
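The assertion above boils down to: does the dashboard-metrics-scraper deployment reference an image containing registry.k8s.io/echoserver:1.4? A hedged client-go sketch of that check (the kubeconfig path is a placeholder; the test itself drives kubectl with a per-profile context instead):

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := clientset.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The test's check amounts to: does any container image contain the
	// expected reference?
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("found expected image:", c.Image)
			return
		}
	}
	fmt.Println("expected image not found")
}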
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-757654 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-757654 logs -n 25: (1.153615951s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-800809 sudo iptables                       | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo docker                         | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo cat                            | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo                                | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo find                           | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-800809 sudo crio                           | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-800809                                     | bridge-800809 | jenkins | v1.33.1 | 02 Aug 24 19:13 UTC | 02 Aug 24 19:13 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 19:11:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 19:11:48.992549   75193 out.go:291] Setting OutFile to fd 1 ...
	I0802 19:11:48.992698   75193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:11:48.992710   75193 out.go:304] Setting ErrFile to fd 2...
	I0802 19:11:48.992718   75193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 19:11:48.992987   75193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 19:11:48.993749   75193 out.go:298] Setting JSON to false
	I0802 19:11:48.995374   75193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6853,"bootTime":1722619056,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 19:11:48.995459   75193 start.go:139] virtualization: kvm guest
	I0802 19:11:48.997722   75193 out.go:177] * [bridge-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 19:11:48.999182   75193 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 19:11:48.999201   75193 notify.go:220] Checking for updates...
	I0802 19:11:49.001741   75193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 19:11:49.003065   75193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:11:49.004367   75193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:49.005495   75193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 19:11:49.006542   75193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 19:11:49.008196   75193 config.go:182] Loaded profile config "embed-certs-757654": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008328   75193 config.go:182] Loaded profile config "enable-default-cni-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008486   75193 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.008604   75193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 19:11:49.048702   75193 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 19:11:49.050024   75193 start.go:297] selected driver: kvm2
	I0802 19:11:49.050039   75193 start.go:901] validating driver "kvm2" against <nil>
	I0802 19:11:49.050056   75193 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 19:11:49.050792   75193 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:11:49.050892   75193 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 19:11:49.068001   75193 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 19:11:49.068065   75193 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 19:11:49.068314   75193 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:11:49.068384   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:11:49.068399   75193 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 19:11:49.068479   75193 start.go:340] cluster config:
	{Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:11:49.068594   75193 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 19:11:49.071081   75193 out.go:177] * Starting "bridge-800809" primary control-plane node in "bridge-800809" cluster
	I0802 19:11:49.072198   75193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:11:49.072237   75193 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 19:11:49.072249   75193 cache.go:56] Caching tarball of preloaded images
	I0802 19:11:49.072353   75193 preload.go:172] Found /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0802 19:11:49.072368   75193 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0802 19:11:49.072479   75193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json ...
	I0802 19:11:49.072498   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json: {Name:mka48f260b1295818e6d1cbbba5525ad1155665e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:49.072633   75193 start.go:360] acquireMachinesLock for bridge-800809: {Name:mk16e1d881c947a412c8f092032e8ae7d93261d4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0802 19:11:50.415756   75193 start.go:364] duration metric: took 1.343099886s to acquireMachinesLock for "bridge-800809"
	I0802 19:11:50.415841   75193 start.go:93] Provisioning new machine with config: &{Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:11:50.415956   75193 start.go:125] createHost starting for "" (driver="kvm2")
	I0802 19:11:48.505148   73373 main.go:141] libmachine: (flannel-800809) DBG | Getting to WaitForSSH function...
	I0802 19:11:48.740982   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.741432   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.741476   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.741649   73373 main.go:141] libmachine: (flannel-800809) DBG | Using SSH client type: external
	I0802 19:11:48.741674   73373 main.go:141] libmachine: (flannel-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa (-rw-------)
	I0802 19:11:48.741728   73373 main.go:141] libmachine: (flannel-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:11:48.741750   73373 main.go:141] libmachine: (flannel-800809) DBG | About to run SSH command:
	I0802 19:11:48.741767   73373 main.go:141] libmachine: (flannel-800809) DBG | exit 0
	I0802 19:11:48.871940   73373 main.go:141] libmachine: (flannel-800809) DBG | SSH cmd err, output: <nil>: 
	I0802 19:11:48.872213   73373 main.go:141] libmachine: (flannel-800809) KVM machine creation complete!
	I0802 19:11:48.872555   73373 main.go:141] libmachine: (flannel-800809) Calling .GetConfigRaw
	I0802 19:11:48.873126   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:48.873328   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:48.873502   73373 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 19:11:48.873516   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:11:48.874786   73373 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 19:11:48.874798   73373 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 19:11:48.874804   73373 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 19:11:48.874812   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:48.877792   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.878157   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.878200   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.878321   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:48.878492   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.878652   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.878773   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:48.878955   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:48.879218   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:48.879237   73373 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 19:11:48.978740   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:11:48.978776   73373 main.go:141] libmachine: Detecting the provisioner...
	I0802 19:11:48.978786   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:48.981420   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.982001   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:48.982023   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:48.982464   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:48.982673   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.982853   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:48.983042   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:48.983246   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:48.983413   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:48.983423   73373 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 19:11:49.087531   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 19:11:49.087584   73373 main.go:141] libmachine: found compatible host: buildroot
	I0802 19:11:49.087594   73373 main.go:141] libmachine: Provisioning with buildroot...
	I0802 19:11:49.087601   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.087856   73373 buildroot.go:166] provisioning hostname "flannel-800809"
	I0802 19:11:49.087882   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.088025   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.091214   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.091587   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.091607   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.091769   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.091938   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.092107   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.092304   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.092463   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.092662   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.092677   73373 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-800809 && echo "flannel-800809" | sudo tee /etc/hostname
	I0802 19:11:49.211080   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-800809
	
	I0802 19:11:49.211128   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.214099   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.214556   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.214588   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.214776   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.214965   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.215157   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.215332   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.215492   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.215722   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.215739   73373 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-800809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-800809/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-800809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:11:49.328159   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
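	The hostname provisioning logged above boils down to the following shell sequence (a consolidated sketch of the two SSH commands minikube ran in this run; the profile name flannel-800809 comes from the log):
	
	    #!/bin/sh
	    # Consolidated sketch of the hostname provisioning shown in the log above.
	    PROFILE=flannel-800809
	    sudo hostname "$PROFILE" && echo "$PROFILE" | sudo tee /etc/hostname
	    if ! grep -xq ".*\s$PROFILE" /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $PROFILE/g" /etc/hosts
	      else
	        echo "127.0.1.1 $PROFILE" | sudo tee -a /etc/hosts
	      fi
	    fi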
	I0802 19:11:49.328206   73373 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:11:49.328228   73373 buildroot.go:174] setting up certificates
	I0802 19:11:49.328243   73373 provision.go:84] configureAuth start
	I0802 19:11:49.328261   73373 main.go:141] libmachine: (flannel-800809) Calling .GetMachineName
	I0802 19:11:49.328548   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:49.332031   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.332412   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.332440   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.332718   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.335690   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.336088   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.336119   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.336221   73373 provision.go:143] copyHostCerts
	I0802 19:11:49.336302   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:11:49.336313   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:11:49.336382   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:11:49.336489   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:11:49.336501   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:11:49.336551   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:11:49.336647   73373 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:11:49.336658   73373 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:11:49.336702   73373 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:11:49.336819   73373 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.flannel-800809 san=[127.0.0.1 192.168.50.5 flannel-800809 localhost minikube]
	I0802 19:11:49.754782   73373 provision.go:177] copyRemoteCerts
	I0802 19:11:49.754836   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:11:49.754858   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.757834   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.758205   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.758235   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.758393   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.758596   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.758819   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.759000   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:49.841096   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:11:49.864112   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0802 19:11:49.885998   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 19:11:49.909204   73373 provision.go:87] duration metric: took 580.942306ms to configureAuth
	I0802 19:11:49.909230   73373 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:11:49.909377   73373 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:11:49.909440   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:49.912361   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.912729   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:49.912750   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:49.912936   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:49.913138   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.913285   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:49.913419   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:49.913567   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:49.913721   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:49.913734   73373 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:11:50.185279   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
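	The %!s(MISSING) fragments in the command logged above are printf-verb artifacts in minikube's own log output, not part of the command that ran; judging from the echoed result, the guest-side step is equivalent to this sketch:
	
	    sudo mkdir -p /etc/sysconfig && printf "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio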
	
	I0802 19:11:50.185307   73373 main.go:141] libmachine: Checking connection to Docker...
	I0802 19:11:50.185318   73373 main.go:141] libmachine: (flannel-800809) Calling .GetURL
	I0802 19:11:50.186620   73373 main.go:141] libmachine: (flannel-800809) DBG | Using libvirt version 6000000
	I0802 19:11:50.189062   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.189413   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.189448   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.189594   73373 main.go:141] libmachine: Docker is up and running!
	I0802 19:11:50.189609   73373 main.go:141] libmachine: Reticulating splines...
	I0802 19:11:50.189617   73373 client.go:171] duration metric: took 28.346480007s to LocalClient.Create
	I0802 19:11:50.189640   73373 start.go:167] duration metric: took 28.346547758s to libmachine.API.Create "flannel-800809"
	I0802 19:11:50.189651   73373 start.go:293] postStartSetup for "flannel-800809" (driver="kvm2")
	I0802 19:11:50.189664   73373 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:11:50.189696   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.189921   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:11:50.189946   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.192114   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.192542   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.192572   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.192752   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.192938   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.193097   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.193227   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.273672   73373 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:11:50.277719   73373 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:11:50.277738   73373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:11:50.277796   73373 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:11:50.277884   73373 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:11:50.277977   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:11:50.286547   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:11:50.309197   73373 start.go:296] duration metric: took 119.533267ms for postStartSetup
	I0802 19:11:50.309250   73373 main.go:141] libmachine: (flannel-800809) Calling .GetConfigRaw
	I0802 19:11:50.309820   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:50.312551   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.312939   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.312966   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.313253   73373 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/config.json ...
	I0802 19:11:50.313480   73373 start.go:128] duration metric: took 28.490461799s to createHost
	I0802 19:11:50.313505   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.315973   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.316275   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.316297   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.316444   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.316587   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.316723   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.316880   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.317045   73373 main.go:141] libmachine: Using SSH client type: native
	I0802 19:11:50.317236   73373 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I0802 19:11:50.317251   73373 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:11:50.415595   73373 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625910.396656919
	
	I0802 19:11:50.415615   73373 fix.go:216] guest clock: 1722625910.396656919
	I0802 19:11:50.415623   73373 fix.go:229] Guest: 2024-08-02 19:11:50.396656919 +0000 UTC Remote: 2024-08-02 19:11:50.313494702 +0000 UTC m=+28.607265002 (delta=83.162217ms)
	I0802 19:11:50.415664   73373 fix.go:200] guest clock delta is within tolerance: 83.162217ms
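	The garbled date +%!s(MISSING).%!N(MISSING) above is the same kind of log-formatting artifact: the guest clock is read as a seconds.nanoseconds timestamp and compared against the host clock, roughly:
	
	    # Reconstruction of the clock check logged above (format string inferred from the output).
	    date +%s.%N          # the guest reported 1722625910.396656919 in this run
	    # minikube compares this against the host time; the ~83ms delta seen here is within tolerance.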
	I0802 19:11:50.415674   73373 start.go:83] releasing machines lock for "flannel-800809", held for 28.592726479s
	I0802 19:11:50.415703   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.415948   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:50.418650   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.419014   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.419037   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.419202   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419647   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419805   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:11:50.419884   73373 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:11:50.419927   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.419986   73373 ssh_runner.go:195] Run: cat /version.json
	I0802 19:11:50.420015   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:11:50.422581   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.422928   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.422962   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.422986   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.423148   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.423341   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.423490   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:50.423498   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.423509   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:50.423626   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.423684   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:11:50.423837   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:11:50.423974   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:11:50.424137   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:11:50.533512   73373 ssh_runner.go:195] Run: systemctl --version
	I0802 19:11:50.539979   73373 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:11:50.706624   73373 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:11:50.712096   73373 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:11:50.712167   73373 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:11:50.727356   73373 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
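	The %!p(MISSING) in the find invocation above stands in for find's %p print directive; a cleaned-up sketch of the command that renamed the podman/bridge CNI configs (here /etc/cni/net.d/87-podman-bridge.conflist) looks like this:
	
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;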
	I0802 19:11:50.727376   73373 start.go:495] detecting cgroup driver to use...
	I0802 19:11:50.727442   73373 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:11:50.747572   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:11:50.761645   73373 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:11:50.761702   73373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:11:50.775483   73373 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:11:50.788429   73373 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:11:50.905227   73373 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:11:51.076329   73373 docker.go:233] disabling docker service ...
	I0802 19:11:51.076406   73373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:11:51.091256   73373 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:11:51.104035   73373 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:11:51.224792   73373 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:11:51.341027   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
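	The next logged command writes the crictl configuration; its printf argument is again masked by %!s(MISSING), but the content appears inline in the log, so the step is equivalent to:
	
	    sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" \
	      | sudo tee /etc/crictl.yaml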
	I0802 19:11:51.355045   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:11:51.372582   73373 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:11:51.372641   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.382224   73373 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:11:51.382288   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.392268   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.402012   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.411898   73373 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:11:51.422311   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.432945   73373 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.450825   73373 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:11:51.460500   73373 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:11:51.470102   73373 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:11:51.470167   73373 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:11:51.483295   73373 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0802 19:11:51.492497   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:11:51.604098   73373 ssh_runner.go:195] Run: sudo systemctl restart crio
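	Taken together, the sed edits above leave the CRI-O drop-in configured for the cgroupfs driver and the registry.k8s.io/pause:3.9 image; an approximate sketch of the resulting settings, assembled from the commands rather than dumped from the real file:
	
	    # Approximate effect of the edits logged above on /etc/crio/crio.conf.d/02-crio.conf:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]
	    sudo systemctl daemon-reload && sudo systemctl restart crio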
	I0802 19:11:51.756727   73373 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:11:51.756799   73373 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:11:51.761534   73373 start.go:563] Will wait 60s for crictl version
	I0802 19:11:51.761594   73373 ssh_runner.go:195] Run: which crictl
	I0802 19:11:51.764994   73373 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:11:51.806688   73373 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:11:51.806766   73373 ssh_runner.go:195] Run: crio --version
	I0802 19:11:51.846603   73373 ssh_runner.go:195] Run: crio --version
	I0802 19:11:51.877815   73373 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:11:50.418096   75193 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0802 19:11:50.418330   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:11:50.418396   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:11:50.436086   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0802 19:11:50.436509   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:11:50.437139   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:11:50.437166   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:11:50.437596   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:11:50.437836   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:11:50.438023   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:11:50.438221   75193 start.go:159] libmachine.API.Create for "bridge-800809" (driver="kvm2")
	I0802 19:11:50.438251   75193 client.go:168] LocalClient.Create starting
	I0802 19:11:50.438282   75193 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem
	I0802 19:11:50.438323   75193 main.go:141] libmachine: Decoding PEM data...
	I0802 19:11:50.438342   75193 main.go:141] libmachine: Parsing certificate...
	I0802 19:11:50.438428   75193 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem
	I0802 19:11:50.438460   75193 main.go:141] libmachine: Decoding PEM data...
	I0802 19:11:50.438482   75193 main.go:141] libmachine: Parsing certificate...
	I0802 19:11:50.438513   75193 main.go:141] libmachine: Running pre-create checks...
	I0802 19:11:50.438526   75193 main.go:141] libmachine: (bridge-800809) Calling .PreCreateCheck
	I0802 19:11:50.438949   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:11:50.439422   75193 main.go:141] libmachine: Creating machine...
	I0802 19:11:50.439441   75193 main.go:141] libmachine: (bridge-800809) Calling .Create
	I0802 19:11:50.439584   75193 main.go:141] libmachine: (bridge-800809) Creating KVM machine...
	I0802 19:11:50.440897   75193 main.go:141] libmachine: (bridge-800809) DBG | found existing default KVM network
	I0802 19:11:50.442638   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.442487   75282 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000270100}
	I0802 19:11:50.442665   75193 main.go:141] libmachine: (bridge-800809) DBG | created network xml: 
	I0802 19:11:50.442679   75193 main.go:141] libmachine: (bridge-800809) DBG | <network>
	I0802 19:11:50.442688   75193 main.go:141] libmachine: (bridge-800809) DBG |   <name>mk-bridge-800809</name>
	I0802 19:11:50.442699   75193 main.go:141] libmachine: (bridge-800809) DBG |   <dns enable='no'/>
	I0802 19:11:50.442711   75193 main.go:141] libmachine: (bridge-800809) DBG |   
	I0802 19:11:50.442722   75193 main.go:141] libmachine: (bridge-800809) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0802 19:11:50.442736   75193 main.go:141] libmachine: (bridge-800809) DBG |     <dhcp>
	I0802 19:11:50.442750   75193 main.go:141] libmachine: (bridge-800809) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0802 19:11:50.442764   75193 main.go:141] libmachine: (bridge-800809) DBG |     </dhcp>
	I0802 19:11:50.442777   75193 main.go:141] libmachine: (bridge-800809) DBG |   </ip>
	I0802 19:11:50.442852   75193 main.go:141] libmachine: (bridge-800809) DBG |   
	I0802 19:11:50.442876   75193 main.go:141] libmachine: (bridge-800809) DBG | </network>
	I0802 19:11:50.442893   75193 main.go:141] libmachine: (bridge-800809) DBG | 
	I0802 19:11:50.448692   75193 main.go:141] libmachine: (bridge-800809) DBG | trying to create private KVM network mk-bridge-800809 192.168.39.0/24...
	I0802 19:11:50.526128   75193 main.go:141] libmachine: (bridge-800809) DBG | private KVM network mk-bridge-800809 192.168.39.0/24 created
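	The network XML dumped above is a plain libvirt network with DHCP on 192.168.39.0/24 and DNS disabled; an equivalent manual definition with virsh (illustrative only; the kvm2 driver creates it through the libvirt API rather than by shelling out to virsh) would be:
	
	    cat > mk-bridge-800809.xml <<'EOF'
	    <network>
	      <name>mk-bridge-800809</name>
	      <dns enable='no'/>
	      <ip address='192.168.39.1' netmask='255.255.255.0'>
	        <dhcp>
	          <range start='192.168.39.2' end='192.168.39.253'/>
	        </dhcp>
	      </ip>
	    </network>
	    EOF
	    virsh net-define mk-bridge-800809.xml && virsh net-start mk-bridge-800809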
	I0802 19:11:50.526180   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.526103   75282 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:50.526226   75193 main.go:141] libmachine: (bridge-800809) Setting up store path in /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 ...
	I0802 19:11:50.526296   75193 main.go:141] libmachine: (bridge-800809) Building disk image from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 19:11:50.526397   75193 main.go:141] libmachine: (bridge-800809) Downloading /home/jenkins/minikube-integration/19355-5397/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso...
	I0802 19:11:50.782637   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.782507   75282 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa...
	I0802 19:11:50.989227   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.989067   75282 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/bridge-800809.rawdisk...
	I0802 19:11:50.989265   75193 main.go:141] libmachine: (bridge-800809) DBG | Writing magic tar header
	I0802 19:11:50.989349   75193 main.go:141] libmachine: (bridge-800809) DBG | Writing SSH key tar header
	I0802 19:11:50.989388   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 (perms=drwx------)
	I0802 19:11:50.989414   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:50.989194   75282 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809 ...
	I0802 19:11:50.989426   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube/machines (perms=drwxr-xr-x)
	I0802 19:11:50.989444   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397/.minikube (perms=drwxr-xr-x)
	I0802 19:11:50.989457   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration/19355-5397 (perms=drwxrwxr-x)
	I0802 19:11:50.989474   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0802 19:11:50.989493   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809
	I0802 19:11:50.989506   75193 main.go:141] libmachine: (bridge-800809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0802 19:11:50.989520   75193 main.go:141] libmachine: (bridge-800809) Creating domain...
	I0802 19:11:50.989539   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube/machines
	I0802 19:11:50.989552   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 19:11:50.989564   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19355-5397
	I0802 19:11:50.989579   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0802 19:11:50.989592   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home/jenkins
	I0802 19:11:50.989607   75193 main.go:141] libmachine: (bridge-800809) DBG | Checking permissions on dir: /home
	I0802 19:11:50.989621   75193 main.go:141] libmachine: (bridge-800809) DBG | Skipping /home - not owner
	I0802 19:11:50.990738   75193 main.go:141] libmachine: (bridge-800809) define libvirt domain using xml: 
	I0802 19:11:50.990764   75193 main.go:141] libmachine: (bridge-800809) <domain type='kvm'>
	I0802 19:11:50.990795   75193 main.go:141] libmachine: (bridge-800809)   <name>bridge-800809</name>
	I0802 19:11:50.990818   75193 main.go:141] libmachine: (bridge-800809)   <memory unit='MiB'>3072</memory>
	I0802 19:11:50.990832   75193 main.go:141] libmachine: (bridge-800809)   <vcpu>2</vcpu>
	I0802 19:11:50.990842   75193 main.go:141] libmachine: (bridge-800809)   <features>
	I0802 19:11:50.990854   75193 main.go:141] libmachine: (bridge-800809)     <acpi/>
	I0802 19:11:50.990863   75193 main.go:141] libmachine: (bridge-800809)     <apic/>
	I0802 19:11:50.990874   75193 main.go:141] libmachine: (bridge-800809)     <pae/>
	I0802 19:11:50.990890   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.990900   75193 main.go:141] libmachine: (bridge-800809)   </features>
	I0802 19:11:50.990907   75193 main.go:141] libmachine: (bridge-800809)   <cpu mode='host-passthrough'>
	I0802 19:11:50.990915   75193 main.go:141] libmachine: (bridge-800809)   
	I0802 19:11:50.990921   75193 main.go:141] libmachine: (bridge-800809)   </cpu>
	I0802 19:11:50.990929   75193 main.go:141] libmachine: (bridge-800809)   <os>
	I0802 19:11:50.990951   75193 main.go:141] libmachine: (bridge-800809)     <type>hvm</type>
	I0802 19:11:50.990964   75193 main.go:141] libmachine: (bridge-800809)     <boot dev='cdrom'/>
	I0802 19:11:50.990977   75193 main.go:141] libmachine: (bridge-800809)     <boot dev='hd'/>
	I0802 19:11:50.990989   75193 main.go:141] libmachine: (bridge-800809)     <bootmenu enable='no'/>
	I0802 19:11:50.990998   75193 main.go:141] libmachine: (bridge-800809)   </os>
	I0802 19:11:50.991004   75193 main.go:141] libmachine: (bridge-800809)   <devices>
	I0802 19:11:50.991014   75193 main.go:141] libmachine: (bridge-800809)     <disk type='file' device='cdrom'>
	I0802 19:11:50.991025   75193 main.go:141] libmachine: (bridge-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/boot2docker.iso'/>
	I0802 19:11:50.991035   75193 main.go:141] libmachine: (bridge-800809)       <target dev='hdc' bus='scsi'/>
	I0802 19:11:50.991043   75193 main.go:141] libmachine: (bridge-800809)       <readonly/>
	I0802 19:11:50.991052   75193 main.go:141] libmachine: (bridge-800809)     </disk>
	I0802 19:11:50.991061   75193 main.go:141] libmachine: (bridge-800809)     <disk type='file' device='disk'>
	I0802 19:11:50.991072   75193 main.go:141] libmachine: (bridge-800809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0802 19:11:50.991087   75193 main.go:141] libmachine: (bridge-800809)       <source file='/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/bridge-800809.rawdisk'/>
	I0802 19:11:50.991097   75193 main.go:141] libmachine: (bridge-800809)       <target dev='hda' bus='virtio'/>
	I0802 19:11:50.991123   75193 main.go:141] libmachine: (bridge-800809)     </disk>
	I0802 19:11:50.991135   75193 main.go:141] libmachine: (bridge-800809)     <interface type='network'>
	I0802 19:11:50.991148   75193 main.go:141] libmachine: (bridge-800809)       <source network='mk-bridge-800809'/>
	I0802 19:11:50.991158   75193 main.go:141] libmachine: (bridge-800809)       <model type='virtio'/>
	I0802 19:11:50.991168   75193 main.go:141] libmachine: (bridge-800809)     </interface>
	I0802 19:11:50.991178   75193 main.go:141] libmachine: (bridge-800809)     <interface type='network'>
	I0802 19:11:50.991191   75193 main.go:141] libmachine: (bridge-800809)       <source network='default'/>
	I0802 19:11:50.991201   75193 main.go:141] libmachine: (bridge-800809)       <model type='virtio'/>
	I0802 19:11:50.991209   75193 main.go:141] libmachine: (bridge-800809)     </interface>
	I0802 19:11:50.991226   75193 main.go:141] libmachine: (bridge-800809)     <serial type='pty'>
	I0802 19:11:50.991258   75193 main.go:141] libmachine: (bridge-800809)       <target port='0'/>
	I0802 19:11:50.991280   75193 main.go:141] libmachine: (bridge-800809)     </serial>
	I0802 19:11:50.991291   75193 main.go:141] libmachine: (bridge-800809)     <console type='pty'>
	I0802 19:11:50.991302   75193 main.go:141] libmachine: (bridge-800809)       <target type='serial' port='0'/>
	I0802 19:11:50.991313   75193 main.go:141] libmachine: (bridge-800809)     </console>
	I0802 19:11:50.991325   75193 main.go:141] libmachine: (bridge-800809)     <rng model='virtio'>
	I0802 19:11:50.991335   75193 main.go:141] libmachine: (bridge-800809)       <backend model='random'>/dev/random</backend>
	I0802 19:11:50.991349   75193 main.go:141] libmachine: (bridge-800809)     </rng>
	I0802 19:11:50.991363   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.991386   75193 main.go:141] libmachine: (bridge-800809)     
	I0802 19:11:50.991395   75193 main.go:141] libmachine: (bridge-800809)   </devices>
	I0802 19:11:50.991400   75193 main.go:141] libmachine: (bridge-800809) </domain>
	I0802 19:11:50.991410   75193 main.go:141] libmachine: (bridge-800809) 
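The block above is the complete libvirt domain XML minikube defines for bridge-800809: the boot2docker ISO attached as a SCSI cdrom, the raw disk image on virtio, and two virtio NICs on the mk-bridge-800809 and default networks. As a rough sketch (not part of this test run) of how the resulting domain could be inspected by hand on the libvirt host, assuming virsh is installed and pointed at the qemu:///system URI shown in the log:

    # dump the XML libvirt actually stored for the domain
    virsh -c qemu:///system dumpxml bridge-800809
    # list DHCP leases on the private network to find the VM's IP
    virsh -c qemu:///system net-dhcp-leases mk-bridge-800809
    # or query interface addresses straight from the domain
    virsh -c qemu:///system domifaddr bridge-800809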
	I0802 19:11:50.996626   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:37:ac:11 in network default
	I0802 19:11:50.997257   75193 main.go:141] libmachine: (bridge-800809) Ensuring networks are active...
	I0802 19:11:50.997278   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:50.998158   75193 main.go:141] libmachine: (bridge-800809) Ensuring network default is active
	I0802 19:11:50.998535   75193 main.go:141] libmachine: (bridge-800809) Ensuring network mk-bridge-800809 is active
	I0802 19:11:50.999134   75193 main.go:141] libmachine: (bridge-800809) Getting domain xml...
	I0802 19:11:50.999961   75193 main.go:141] libmachine: (bridge-800809) Creating domain...
	I0802 19:11:52.379816   75193 main.go:141] libmachine: (bridge-800809) Waiting to get IP...
	I0802 19:11:52.381036   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.381666   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.381725   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.381644   75282 retry.go:31] will retry after 248.454118ms: waiting for machine to come up
	I0802 19:11:52.632358   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.632962   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.632984   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.632924   75282 retry.go:31] will retry after 331.963102ms: waiting for machine to come up
	I0802 19:11:52.966675   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:52.967280   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:52.967328   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:52.967231   75282 retry.go:31] will retry after 302.105474ms: waiting for machine to come up
	I0802 19:11:53.270669   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:53.271269   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:53.271317   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:53.271216   75282 retry.go:31] will retry after 426.086034ms: waiting for machine to come up
	I0802 19:11:53.698800   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:53.699493   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:53.699522   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:53.699444   75282 retry.go:31] will retry after 739.113839ms: waiting for machine to come up
	I0802 19:11:51.879036   73373 main.go:141] libmachine: (flannel-800809) Calling .GetIP
	I0802 19:11:51.882396   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:51.882931   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:11:51.882958   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:11:51.883240   73373 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0802 19:11:51.887474   73373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:11:51.899509   73373 kubeadm.go:883] updating cluster {Name:flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:11:51.899651   73373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:11:51.899712   73373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:11:51.930905   73373 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:11:51.930990   73373 ssh_runner.go:195] Run: which lz4
	I0802 19:11:51.934836   73373 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:11:51.938936   73373 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:11:51.938969   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:11:53.236069   73373 crio.go:462] duration metric: took 1.301263129s to copy over tarball
	I0802 19:11:53.236155   73373 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:11:55.689790   73373 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.453606792s)
	I0802 19:11:55.689815   73373 crio.go:469] duration metric: took 2.453709131s to extract the tarball
	I0802 19:11:55.689824   73373 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 19:11:55.741095   73373 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:11:55.790173   73373 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:11:55.790194   73373 cache_images.go:84] Images are preloaded, skipping loading
	I0802 19:11:55.790204   73373 kubeadm.go:934] updating node { 192.168.50.5 8443 v1.30.3 crio true true} ...
	I0802 19:11:55.790341   73373 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-800809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
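The [Unit]/[Service] fragment above is the kubelet systemd drop-in minikube generates for flannel-800809; a few lines further down it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf before kubelet is restarted. A minimal sketch, not a step this run executes, of how the effective unit could be checked on the node over SSH:

    # show the merged kubelet unit, including the 10-kubeadm.conf drop-in
    systemctl cat kubelet
    # confirm the flags kubelet was actually started with
    systemctl show kubelet -p ExecStart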
	I0802 19:11:55.790434   73373 ssh_runner.go:195] Run: crio config
	I0802 19:11:55.846997   73373 cni.go:84] Creating CNI manager for "flannel"
	I0802 19:11:55.847028   73373 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:11:55.847061   73373 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-800809 NodeName:flannel-800809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:11:55.847276   73373 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-800809"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 19:11:55.847355   73373 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:11:55.859933   73373 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:11:55.860000   73373 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:11:55.869006   73373 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0802 19:11:55.885489   73373 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:11:55.902373   73373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2152 bytes)
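The YAML dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file) is what the scp line writes to /var/tmp/minikube/kubeadm.yaml.new. As an illustrative sketch only, not a step the test performs, such a generated config could be sanity-checked before the real init using the binary path seen throughout this log:

    # render everything kubeadm would do, without touching the node
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
    # print the defaults kubeadm would merge in for any fields left unset
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config print init-defaults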
	I0802 19:11:55.919889   73373 ssh_runner.go:195] Run: grep 192.168.50.5	control-plane.minikube.internal$ /etc/hosts
	I0802 19:11:55.923860   73373 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:11:55.936457   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:11:56.078744   73373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:11:56.098162   73373 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809 for IP: 192.168.50.5
	I0802 19:11:56.098185   73373 certs.go:194] generating shared ca certs ...
	I0802 19:11:56.098207   73373 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.098390   73373 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:11:56.098451   73373 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:11:56.098464   73373 certs.go:256] generating profile certs ...
	I0802 19:11:56.098560   73373 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key
	I0802 19:11:56.098585   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt with IP's: []
	I0802 19:11:56.825487   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt ...
	I0802 19:11:56.825521   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.crt: {Name:mk8798632b721acc602eb532cc80981f8a8eac6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.825708   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key ...
	I0802 19:11:56.825722   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/client.key: {Name:mk887e5f10903f5893b7d910b7823cb576fc4901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:56.825817   73373 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993
	I0802 19:11:56.825837   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.5]
	I0802 19:11:57.040275   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 ...
	I0802 19:11:57.040301   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993: {Name:mk322863c77775a6ddc0c85a55db52704046ff51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.040461   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993 ...
	I0802 19:11:57.040475   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993: {Name:mkd172b05439072f3504d2c7474093f97a63f63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.040555   73373 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt.6e650993 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt
	I0802 19:11:57.040652   73373 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key.6e650993 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key
	I0802 19:11:57.040712   73373 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key
	I0802 19:11:57.040728   73373 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt with IP's: []
	I0802 19:11:57.374226   73373 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt ...
	I0802 19:11:57.374253   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt: {Name:mk67d9b0bfee7da40f1bc144fab49d9c45f053a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.374421   73373 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key ...
	I0802 19:11:57.374435   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key: {Name:mk6d68fb8dc1fc3d1d498ffeff8a3d201d7e64f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:11:57.374623   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:11:57.374666   73373 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:11:57.374681   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:11:57.374716   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:11:57.374750   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:11:57.374782   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:11:57.374836   73373 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:11:57.375444   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:11:57.404539   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:11:57.432838   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:11:57.457749   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:11:57.481495   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 19:11:57.514589   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:11:57.552197   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:11:57.577361   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/flannel-800809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0802 19:11:57.600577   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:11:57.623930   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:11:57.651787   73373 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:11:57.679626   73373 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:11:57.700087   73373 ssh_runner.go:195] Run: openssl version
	I0802 19:11:57.705923   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:11:57.716893   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.721638   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.721689   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:11:57.728102   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:11:57.744312   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:11:57.755979   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.761595   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.761647   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:11:57.767747   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
	I0802 19:11:57.781052   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:11:57.792184   73373 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.796625   73373 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.796674   73373 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:11:57.805743   73373 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:11:57.819999   73373 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:11:57.825075   73373 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 19:11:57.825139   73373 kubeadm.go:392] StartCluster: {Name:flannel-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:11:57.825230   73373 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:11:57.825310   73373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:11:57.863560   73373 cri.go:89] found id: ""
	I0802 19:11:57.863635   73373 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:11:57.875742   73373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:11:57.885379   73373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:11:57.895723   73373 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:11:57.895745   73373 kubeadm.go:157] found existing configuration files:
	
	I0802 19:11:57.895808   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:11:57.906310   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:11:57.906376   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:11:57.919145   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:11:57.929876   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:11:57.929941   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:11:57.940648   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:11:57.950387   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:11:57.950445   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:11:57.960449   73373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:11:57.969472   73373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:11:57.969531   73373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:11:57.978790   73373 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:11:58.048446   73373 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:11:58.048623   73373 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:11:58.201701   73373 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:11:58.201911   73373 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:11:58.202081   73373 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 19:11:58.460575   73373 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:11:54.440156   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:54.440733   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:54.440762   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:54.440676   75282 retry.go:31] will retry after 832.997741ms: waiting for machine to come up
	I0802 19:11:55.275698   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:55.276162   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:55.276204   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:55.276130   75282 retry.go:31] will retry after 800.164807ms: waiting for machine to come up
	I0802 19:11:56.077594   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:56.078207   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:56.078241   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:56.078138   75282 retry.go:31] will retry after 952.401705ms: waiting for machine to come up
	I0802 19:11:57.032437   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:57.032961   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:57.032995   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:57.032916   75282 retry.go:31] will retry after 1.176859984s: waiting for machine to come up
	I0802 19:11:58.211447   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:11:58.211987   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:11:58.212018   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:11:58.211947   75282 retry.go:31] will retry after 2.284917552s: waiting for machine to come up
	I0802 19:11:58.585912   73373 out.go:204]   - Generating certificates and keys ...
	I0802 19:11:58.586065   73373 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:11:58.586173   73373 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:11:58.726767   73373 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 19:11:58.855822   73373 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 19:11:59.008917   73373 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 19:11:59.332965   73373 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 19:11:59.434770   73373 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 19:11:59.434956   73373 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-800809 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I0802 19:11:59.507754   73373 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 19:11:59.507906   73373 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-800809 localhost] and IPs [192.168.50.5 127.0.0.1 ::1]
	I0802 19:11:59.703753   73373 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 19:12:00.224289   73373 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 19:12:00.375902   73373 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 19:12:00.376036   73373 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:12:00.537256   73373 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:12:00.702586   73373 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:12:00.894124   73373 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:12:01.029850   73373 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:12:01.244850   73373 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:12:01.245753   73373 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:12:01.248598   73373 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:12:01.250534   73373 out.go:204]   - Booting up control plane ...
	I0802 19:12:01.250658   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:12:01.250758   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:12:01.253235   73373 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:12:01.274898   73373 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:12:01.275530   73373 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:12:01.275601   73373 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:12:01.422785   73373 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:12:01.422892   73373 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:12:00.498642   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:00.499198   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:00.499233   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:00.499130   75282 retry.go:31] will retry after 2.584473334s: waiting for machine to come up
	I0802 19:12:03.085072   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:03.085804   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:03.085842   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:03.085702   75282 retry.go:31] will retry after 2.321675283s: waiting for machine to come up
	I0802 19:12:02.427300   73373 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004045728s
	I0802 19:12:02.427405   73373 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:12:07.926160   73373 kubeadm.go:310] [api-check] The API server is healthy after 5.501769866s
	I0802 19:12:07.943545   73373 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:12:07.959330   73373 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:12:07.997482   73373 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:12:07.997710   73373 kubeadm.go:310] [mark-control-plane] Marking the node flannel-800809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:12:08.016986   73373 kubeadm.go:310] [bootstrap-token] Using token: kkupdq.9c0g512l5z6vxhyc
	I0802 19:12:05.724559   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:05.724971   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:05.724991   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:05.724945   75282 retry.go:31] will retry after 3.413268879s: waiting for machine to come up
	I0802 19:12:08.018257   73373 out.go:204]   - Configuring RBAC rules ...
	I0802 19:12:08.018395   73373 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:12:08.025386   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:12:08.035750   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:12:08.039019   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:12:08.044797   73373 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:12:08.049373   73373 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:12:08.332971   73373 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:12:08.769985   73373 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:12:09.332627   73373 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:12:09.333525   73373 kubeadm.go:310] 
	I0802 19:12:09.333606   73373 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:12:09.333617   73373 kubeadm.go:310] 
	I0802 19:12:09.333696   73373 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:12:09.333717   73373 kubeadm.go:310] 
	I0802 19:12:09.333765   73373 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:12:09.333840   73373 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:12:09.333894   73373 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:12:09.333900   73373 kubeadm.go:310] 
	I0802 19:12:09.333947   73373 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:12:09.333953   73373 kubeadm.go:310] 
	I0802 19:12:09.333991   73373 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:12:09.333999   73373 kubeadm.go:310] 
	I0802 19:12:09.334041   73373 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:12:09.334103   73373 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:12:09.334260   73373 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:12:09.334281   73373 kubeadm.go:310] 
	I0802 19:12:09.334413   73373 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:12:09.334527   73373 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:12:09.334540   73373 kubeadm.go:310] 
	I0802 19:12:09.334641   73373 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kkupdq.9c0g512l5z6vxhyc \
	I0802 19:12:09.334737   73373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:12:09.334771   73373 kubeadm.go:310] 	--control-plane 
	I0802 19:12:09.334780   73373 kubeadm.go:310] 
	I0802 19:12:09.334877   73373 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:12:09.334886   73373 kubeadm.go:310] 
	I0802 19:12:09.334999   73373 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kkupdq.9c0g512l5z6vxhyc \
	I0802 19:12:09.335133   73373 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:12:09.335349   73373 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
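At this point kubeadm reports a successful control-plane init for flannel-800809 and prints the join commands. A short, generic sketch of how the fresh cluster could be verified from the node using the admin kubeconfig that output refers to (the test itself proceeds with the CNI and RBAC steps below instead):

    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl get nodes -o wide
    kubectl get pods -n kube-system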
	I0802 19:12:09.335379   73373 cni.go:84] Creating CNI manager for "flannel"
	I0802 19:12:09.337861   73373 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0802 19:12:09.339216   73373 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0802 19:12:09.344611   73373 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0802 19:12:09.344624   73373 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0802 19:12:09.362345   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0802 19:12:09.722212   73373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:12:09.722308   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:09.722311   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-800809 minikube.k8s.io/updated_at=2024_08_02T19_12_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=flannel-800809 minikube.k8s.io/primary=true
	I0802 19:12:09.914588   73373 ops.go:34] apiserver oom_adj: -16
	I0802 19:12:09.914662   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:10.415313   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:10.914921   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:11.415447   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:09.140426   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:09.140955   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find current IP address of domain bridge-800809 in network mk-bridge-800809
	I0802 19:12:09.140978   75193 main.go:141] libmachine: (bridge-800809) DBG | I0802 19:12:09.140905   75282 retry.go:31] will retry after 4.075349181s: waiting for machine to come up
	I0802 19:12:13.219679   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.220314   75193 main.go:141] libmachine: (bridge-800809) Found IP for machine: 192.168.39.217
	I0802 19:12:13.220343   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has current primary IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.220353   75193 main.go:141] libmachine: (bridge-800809) Reserving static IP address...
	I0802 19:12:13.220665   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find host DHCP lease matching {name: "bridge-800809", mac: "52:54:00:ca:09:00", ip: "192.168.39.217"} in network mk-bridge-800809
	I0802 19:12:13.297200   75193 main.go:141] libmachine: (bridge-800809) DBG | Getting to WaitForSSH function...
	I0802 19:12:13.297230   75193 main.go:141] libmachine: (bridge-800809) Reserved static IP address: 192.168.39.217
	I0802 19:12:13.297242   75193 main.go:141] libmachine: (bridge-800809) Waiting for SSH to be available...
	I0802 19:12:13.300545   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:13.300884   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809
	I0802 19:12:13.300913   75193 main.go:141] libmachine: (bridge-800809) DBG | unable to find defined IP address of network mk-bridge-800809 interface with MAC address 52:54:00:ca:09:00
	I0802 19:12:13.301043   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH client type: external
	I0802 19:12:13.301071   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa (-rw-------)
	I0802 19:12:13.301119   75193 main.go:141] libmachine: (bridge-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:12:13.301136   75193 main.go:141] libmachine: (bridge-800809) DBG | About to run SSH command:
	I0802 19:12:13.301150   75193 main.go:141] libmachine: (bridge-800809) DBG | exit 0
	I0802 19:12:13.304679   75193 main.go:141] libmachine: (bridge-800809) DBG | SSH cmd err, output: exit status 255: 
	I0802 19:12:13.304705   75193 main.go:141] libmachine: (bridge-800809) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0802 19:12:13.304716   75193 main.go:141] libmachine: (bridge-800809) DBG | command : exit 0
	I0802 19:12:13.304728   75193 main.go:141] libmachine: (bridge-800809) DBG | err     : exit status 255
	I0802 19:12:13.304742   75193 main.go:141] libmachine: (bridge-800809) DBG | output  : 
	I0802 19:12:11.915698   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:12.414949   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:12.915014   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:13.415620   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:13.914868   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:14.414712   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:14.915354   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:15.415602   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:15.914853   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:16.414800   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:16.305514   75193 main.go:141] libmachine: (bridge-800809) DBG | Getting to WaitForSSH function...
	I0802 19:12:16.307869   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.308383   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.308414   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.308598   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH client type: external
	I0802 19:12:16.308620   75193 main.go:141] libmachine: (bridge-800809) DBG | Using SSH private key: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa (-rw-------)
	I0802 19:12:16.308637   75193 main.go:141] libmachine: (bridge-800809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0802 19:12:16.308646   75193 main.go:141] libmachine: (bridge-800809) DBG | About to run SSH command:
	I0802 19:12:16.308655   75193 main.go:141] libmachine: (bridge-800809) DBG | exit 0
	I0802 19:12:16.435356   75193 main.go:141] libmachine: (bridge-800809) DBG | SSH cmd err, output: <nil>: 
	I0802 19:12:16.435666   75193 main.go:141] libmachine: (bridge-800809) KVM machine creation complete!
	I0802 19:12:16.435995   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:12:16.436646   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:16.436874   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:16.437042   75193 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0802 19:12:16.437057   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:16.438304   75193 main.go:141] libmachine: Detecting operating system of created instance...
	I0802 19:12:16.438316   75193 main.go:141] libmachine: Waiting for SSH to be available...
	I0802 19:12:16.438322   75193 main.go:141] libmachine: Getting to WaitForSSH function...
	I0802 19:12:16.438327   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.440520   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.440917   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.440943   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.441090   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.441261   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.441449   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.441604   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.441774   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.442011   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.442024   75193 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0802 19:12:16.546317   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0802 19:12:16.546340   75193 main.go:141] libmachine: Detecting the provisioner...
	I0802 19:12:16.546347   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.549170   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.549518   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.549564   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.549767   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.549957   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.550117   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.550253   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.550426   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.550596   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.550606   75193 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0802 19:12:16.659539   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0802 19:12:16.659604   75193 main.go:141] libmachine: found compatible host: buildroot
	I0802 19:12:16.659611   75193 main.go:141] libmachine: Provisioning with buildroot...
	I0802 19:12:16.659618   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.659898   75193 buildroot.go:166] provisioning hostname "bridge-800809"
	I0802 19:12:16.659930   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.660113   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.662842   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.663206   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.663238   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.663434   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.663640   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.663783   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.663943   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.664095   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.664253   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.664274   75193 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-800809 && echo "bridge-800809" | sudo tee /etc/hostname
	I0802 19:12:16.785036   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-800809
	
	I0802 19:12:16.785066   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.788091   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.788514   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.788544   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.788728   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:16.788906   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.789098   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:16.789256   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:16.789452   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:16.789636   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:16.789654   75193 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-800809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-800809/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-800809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0802 19:12:16.903884   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
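The two SSH commands above (set the hostname, then patch /etc/hosts) reduce to an idempotent shell sequence. A minimal standalone sketch of the same pattern, with the hostname taken from this log rather than from minikube source, is:

  # Sketch: idempotent hostname + /etc/hosts update, mirroring the commands logged above.
  HOSTNAME=bridge-800809
  sudo hostname "$HOSTNAME" && echo "$HOSTNAME" | sudo tee /etc/hostname
  if ! grep -q "[[:space:]]$HOSTNAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
          # rewrite the existing 127.0.1.1 entry
          sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $HOSTNAME/" /etc/hosts
      else
          # or append one if none exists
          echo "127.0.1.1 $HOSTNAME" | sudo tee -a /etc/hosts
      fi
  fi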
	I0802 19:12:16.903921   75193 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19355-5397/.minikube CaCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19355-5397/.minikube}
	I0802 19:12:16.903944   75193 buildroot.go:174] setting up certificates
	I0802 19:12:16.903955   75193 provision.go:84] configureAuth start
	I0802 19:12:16.903966   75193 main.go:141] libmachine: (bridge-800809) Calling .GetMachineName
	I0802 19:12:16.904256   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:16.907334   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.907737   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.907772   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.907974   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:16.910682   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.911137   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:16.911174   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:16.911330   75193 provision.go:143] copyHostCerts
	I0802 19:12:16.911400   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem, removing ...
	I0802 19:12:16.911413   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem
	I0802 19:12:16.911477   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/ca.pem (1078 bytes)
	I0802 19:12:16.911604   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem, removing ...
	I0802 19:12:16.911615   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem
	I0802 19:12:16.911656   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/cert.pem (1123 bytes)
	I0802 19:12:16.911745   75193 exec_runner.go:144] found /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem, removing ...
	I0802 19:12:16.911754   75193 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem
	I0802 19:12:16.911792   75193 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19355-5397/.minikube/key.pem (1679 bytes)
	I0802 19:12:16.911872   75193 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem org=jenkins.bridge-800809 san=[127.0.0.1 192.168.39.217 bridge-800809 localhost minikube]
	I0802 19:12:17.133295   75193 provision.go:177] copyRemoteCerts
	I0802 19:12:17.133359   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0802 19:12:17.133389   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.136348   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.136748   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.136780   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.136932   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.137156   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.137325   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.137501   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.225231   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0802 19:12:17.250298   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0802 19:12:17.273378   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0802 19:12:17.300993   75193 provision.go:87] duration metric: took 397.025492ms to configureAuth
	I0802 19:12:17.301027   75193 buildroot.go:189] setting minikube options for container-runtime
	I0802 19:12:17.301190   75193 config.go:182] Loaded profile config "bridge-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:17.301282   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.304190   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.304596   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.304630   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.304797   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.305007   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.305184   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.305403   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.305587   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:17.307455   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:17.307485   75193 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0802 19:12:17.584831   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0802 19:12:17.584860   75193 main.go:141] libmachine: Checking connection to Docker...
	I0802 19:12:17.584868   75193 main.go:141] libmachine: (bridge-800809) Calling .GetURL
	I0802 19:12:17.586284   75193 main.go:141] libmachine: (bridge-800809) DBG | Using libvirt version 6000000
	I0802 19:12:17.588701   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.589051   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.589089   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.589200   75193 main.go:141] libmachine: Docker is up and running!
	I0802 19:12:17.589215   75193 main.go:141] libmachine: Reticulating splines...
	I0802 19:12:17.589224   75193 client.go:171] duration metric: took 27.150963663s to LocalClient.Create
	I0802 19:12:17.589266   75193 start.go:167] duration metric: took 27.151030945s to libmachine.API.Create "bridge-800809"
	I0802 19:12:17.589278   75193 start.go:293] postStartSetup for "bridge-800809" (driver="kvm2")
	I0802 19:12:17.589297   75193 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0802 19:12:17.589323   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.589584   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0802 19:12:17.589610   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.591564   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.591969   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.591993   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.592149   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.592358   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.592539   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.592725   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.678103   75193 ssh_runner.go:195] Run: cat /etc/os-release
	I0802 19:12:17.682550   75193 info.go:137] Remote host: Buildroot 2023.02.9
	I0802 19:12:17.682577   75193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/addons for local assets ...
	I0802 19:12:17.682667   75193 filesync.go:126] Scanning /home/jenkins/minikube-integration/19355-5397/.minikube/files for local assets ...
	I0802 19:12:17.682768   75193 filesync.go:149] local asset: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem -> 125472.pem in /etc/ssl/certs
	I0802 19:12:17.682880   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0802 19:12:17.692311   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:12:17.714147   75193 start.go:296] duration metric: took 124.853654ms for postStartSetup
	I0802 19:12:17.714196   75193 main.go:141] libmachine: (bridge-800809) Calling .GetConfigRaw
	I0802 19:12:17.714810   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:17.717241   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.717593   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.717622   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.717877   75193 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/config.json ...
	I0802 19:12:17.718053   75193 start.go:128] duration metric: took 27.302084914s to createHost
	I0802 19:12:17.718079   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.720606   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.720996   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.721022   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.721184   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.721372   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.721539   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.721710   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.721882   75193 main.go:141] libmachine: Using SSH client type: native
	I0802 19:12:17.722040   75193 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0802 19:12:17.722051   75193 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0802 19:12:17.827523   75193 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722625937.802029901
	
	I0802 19:12:17.827566   75193 fix.go:216] guest clock: 1722625937.802029901
	I0802 19:12:17.827575   75193 fix.go:229] Guest: 2024-08-02 19:12:17.802029901 +0000 UTC Remote: 2024-08-02 19:12:17.718066905 +0000 UTC m=+28.764503981 (delta=83.962996ms)
	I0802 19:12:17.827630   75193 fix.go:200] guest clock delta is within tolerance: 83.962996ms
	I0802 19:12:17.827640   75193 start.go:83] releasing machines lock for "bridge-800809", held for 27.411831635s
	I0802 19:12:17.827669   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.828080   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:17.830829   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.831385   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.831422   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.831590   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832056   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832267   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:17.832363   75193 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0802 19:12:17.832420   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.832715   75193 ssh_runner.go:195] Run: cat /version.json
	I0802 19:12:17.832741   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:17.835248   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.835900   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.836297   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836334   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836409   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.836626   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:17.836633   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.836654   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:17.836829   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:17.836847   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.837039   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:17.837047   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.837159   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:17.837299   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:17.952043   75193 ssh_runner.go:195] Run: systemctl --version
	I0802 19:12:17.959767   75193 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0802 19:12:18.124236   75193 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0802 19:12:18.129989   75193 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0802 19:12:18.130076   75193 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0802 19:12:18.145749   75193 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0802 19:12:18.145772   75193 start.go:495] detecting cgroup driver to use...
	I0802 19:12:18.145853   75193 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0802 19:12:18.162511   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0802 19:12:18.177223   75193 docker.go:217] disabling cri-docker service (if available) ...
	I0802 19:12:18.177292   75193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0802 19:12:18.191444   75193 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0802 19:12:18.206118   75193 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0802 19:12:18.327654   75193 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0802 19:12:18.511869   75193 docker.go:233] disabling docker service ...
	I0802 19:12:18.511942   75193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0802 19:12:18.526449   75193 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0802 19:12:18.540362   75193 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0802 19:12:18.661472   75193 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0802 19:12:18.794787   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0802 19:12:18.810299   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0802 19:12:18.828416   75193 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0802 19:12:18.828469   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.838447   75193 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0802 19:12:18.838515   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.848019   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.857431   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.866948   75193 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0802 19:12:18.877032   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.887280   75193 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0802 19:12:18.904012   75193 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
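The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a handful of settings: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and unprivileged low ports. Reconstructed from those edits and CRI-O's documented config layout (the section headers are an assumption, not copied from the VM), the drop-in ends up looking roughly like:

  # Approximate end state of /etc/crio/crio.conf.d/02-crio.conf (reconstruction, for reading the log).
  [crio.image]
  pause_image = "registry.k8s.io/pause:3.9"

  [crio.runtime]
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]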
	I0802 19:12:18.915318   75193 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0802 19:12:18.927838   75193 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0802 19:12:18.927904   75193 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0802 19:12:18.945657   75193 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
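The three steps just logged are a standard bridge-netfilter preflight: probe the sysctl, load br_netfilter when the proc entry is missing, then turn on IPv4 forwarding. A compact sketch of that sequence (same commands as the log, wired into one conditional):

  # Bridge netfilter preflight, mirroring the sequence logged above.
  if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter          # creates /proc/sys/net/bridge/*
  fi
  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"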
	I0802 19:12:18.956854   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:19.069362   75193 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0802 19:12:19.205717   75193 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0802 19:12:19.205778   75193 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0802 19:12:19.210506   75193 start.go:563] Will wait 60s for crictl version
	I0802 19:12:19.210555   75193 ssh_runner.go:195] Run: which crictl
	I0802 19:12:19.214101   75193 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0802 19:12:19.260705   75193 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0802 19:12:19.260794   75193 ssh_runner.go:195] Run: crio --version
	I0802 19:12:19.287812   75193 ssh_runner.go:195] Run: crio --version
	I0802 19:12:19.318772   75193 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0802 19:12:16.915733   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:17.415507   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:17.915631   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:18.415384   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:18.914836   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:19.415786   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:19.915584   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:20.415599   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:20.914685   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:21.415483   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:21.915455   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:22.414929   73373 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:22.657360   73373 kubeadm.go:1113] duration metric: took 12.935137345s to wait for elevateKubeSystemPrivileges
	I0802 19:12:22.657394   73373 kubeadm.go:394] duration metric: took 24.832258811s to StartCluster
	I0802 19:12:22.657415   73373 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:22.657487   73373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:12:22.659358   73373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:22.659614   73373 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:12:22.659734   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 19:12:22.659787   73373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:12:22.659888   73373 addons.go:69] Setting storage-provisioner=true in profile "flannel-800809"
	I0802 19:12:22.659902   73373 addons.go:69] Setting default-storageclass=true in profile "flannel-800809"
	I0802 19:12:22.659929   73373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-800809"
	I0802 19:12:22.659930   73373 addons.go:234] Setting addon storage-provisioner=true in "flannel-800809"
	I0802 19:12:22.659958   73373 config.go:182] Loaded profile config "flannel-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:22.659981   73373 host.go:66] Checking if "flannel-800809" exists ...
	I0802 19:12:22.660406   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.660437   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.660464   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.660500   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.661335   73373 out.go:177] * Verifying Kubernetes components...
	I0802 19:12:22.662788   73373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:22.679160   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I0802 19:12:22.679828   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.680226   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I0802 19:12:22.680562   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.680590   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.681028   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.681319   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.681353   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.681817   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.681840   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.682155   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.682614   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.682646   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.684794   73373 addons.go:234] Setting addon default-storageclass=true in "flannel-800809"
	I0802 19:12:22.684833   73373 host.go:66] Checking if "flannel-800809" exists ...
	I0802 19:12:22.685174   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.685199   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.701476   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I0802 19:12:22.702225   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.702739   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.702764   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.703145   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.703353   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.703795   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0802 19:12:22.704634   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.705254   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.705277   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.705335   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:12:22.705968   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.706578   73373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:22.706632   73373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:22.709042   73373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:12:22.710304   73373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:22.710321   73373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:12:22.710339   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:12:22.713456   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.713924   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:12:22.713939   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.714069   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:12:22.714296   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:12:22.714404   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:12:22.714482   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:12:22.727883   73373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0802 19:12:22.728338   73373 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:22.728890   73373 main.go:141] libmachine: Using API Version  1
	I0802 19:12:22.728913   73373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:22.729290   73373 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:22.729466   73373 main.go:141] libmachine: (flannel-800809) Calling .GetState
	I0802 19:12:22.730947   73373 main.go:141] libmachine: (flannel-800809) Calling .DriverName
	I0802 19:12:22.731190   73373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:22.731205   73373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:12:22.731220   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHHostname
	I0802 19:12:22.734094   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.734452   73373 main.go:141] libmachine: (flannel-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:fa:99", ip: ""} in network mk-flannel-800809: {Iface:virbr2 ExpiryTime:2024-08-02 20:11:36 +0000 UTC Type:0 Mac:52:54:00:c6:fa:99 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:flannel-800809 Clientid:01:52:54:00:c6:fa:99}
	I0802 19:12:22.734507   73373 main.go:141] libmachine: (flannel-800809) DBG | domain flannel-800809 has defined IP address 192.168.50.5 and MAC address 52:54:00:c6:fa:99 in network mk-flannel-800809
	I0802 19:12:22.734630   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHPort
	I0802 19:12:22.734755   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHKeyPath
	I0802 19:12:22.734910   73373 main.go:141] libmachine: (flannel-800809) Calling .GetSSHUsername
	I0802 19:12:22.735022   73373 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/flannel-800809/id_rsa Username:docker}
	I0802 19:12:22.956405   73373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:23.038900   73373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:23.038975   73373 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 19:12:23.158487   73373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:23.607626   73373 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
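The long sed pipeline at 19:12:23.038975 edits the coredns ConfigMap in place; reconstructed from the expressions in that command, the block it injects into the Corefile is approximately the following (it also inserts a `log` directive before the existing `errors` line):

  # Fragment added to the CoreDNS Corefile (reconstructed from the logged sed pipeline).
          hosts {
             192.168.50.1 host.minikube.internal
             fallthrough
          }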
	I0802 19:12:23.607693   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.607710   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.607727   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.607712   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.609620   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.609641   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.609651   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.609660   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.609776   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.609784   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.609793   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.609808   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.610229   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.610324   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.610364   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.610801   73373 node_ready.go:35] waiting up to 15m0s for node "flannel-800809" to be "Ready" ...
	I0802 19:12:23.611708   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.611750   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.611759   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.647249   73373 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:23.647276   73373 main.go:141] libmachine: (flannel-800809) Calling .Close
	I0802 19:12:23.647584   73373 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:23.647636   73373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:23.647592   73373 main.go:141] libmachine: (flannel-800809) DBG | Closing plugin on server side
	I0802 19:12:23.649141   73373 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0802 19:12:19.320079   75193 main.go:141] libmachine: (bridge-800809) Calling .GetIP
	I0802 19:12:19.322960   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:19.323336   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:19.323362   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:19.323687   75193 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0802 19:12:19.327704   75193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:12:19.339464   75193 kubeadm.go:883] updating cluster {Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0802 19:12:19.339558   75193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 19:12:19.339597   75193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:12:19.370398   75193 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0802 19:12:19.370458   75193 ssh_runner.go:195] Run: which lz4
	I0802 19:12:19.374571   75193 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0802 19:12:19.378526   75193 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0802 19:12:19.378559   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0802 19:12:20.699514   75193 crio.go:462] duration metric: took 1.324985414s to copy over tarball
	I0802 19:12:20.699596   75193 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0802 19:12:23.104591   75193 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.404951919s)
	I0802 19:12:23.104630   75193 crio.go:469] duration metric: took 2.405084044s to extract the tarball
	I0802 19:12:23.104640   75193 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0802 19:12:23.159765   75193 ssh_runner.go:195] Run: sudo crictl images --output json
	I0802 19:12:23.205579   75193 crio.go:514] all images are preloaded for cri-o runtime.
	I0802 19:12:23.205607   75193 cache_images.go:84] Images are preloaded, skipping loading
	I0802 19:12:23.205619   75193 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.30.3 crio true true} ...
	I0802 19:12:23.205755   75193 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-800809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0802 19:12:23.205839   75193 ssh_runner.go:195] Run: crio config
	I0802 19:12:23.263041   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:12:23.263086   75193 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0802 19:12:23.263149   75193 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-800809 NodeName:bridge-800809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0802 19:12:23.263305   75193 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-800809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0802 19:12:23.263379   75193 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0802 19:12:23.274695   75193 binaries.go:44] Found k8s binaries, skipping transfer
	I0802 19:12:23.274783   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0802 19:12:23.285056   75193 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0802 19:12:23.302871   75193 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0802 19:12:23.319992   75193 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
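The kubeadm.yaml rendered above is the config that the subsequent kubeadm init run consumes. As an illustrative aside only (the test goes straight to the full init shown further below), a config of this shape can be sanity-checked without applying changes to the node by running kubeadm's dry-run mode against the file that was just copied over; the binary path and file name are the ones this log reports:

# sketch only: validate the rendered config with kubeadm's --dry-run mode
sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run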
	I0802 19:12:23.337423   75193 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0802 19:12:23.341882   75193 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0802 19:12:23.357533   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:23.479926   75193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:23.497646   75193 certs.go:68] Setting up /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809 for IP: 192.168.39.217
	I0802 19:12:23.497669   75193 certs.go:194] generating shared ca certs ...
	I0802 19:12:23.497687   75193 certs.go:226] acquiring lock for ca certs: {Name:mk19e8091201ede09cfac599bd89999226caf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.497850   75193 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key
	I0802 19:12:23.497908   75193 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key
	I0802 19:12:23.497920   75193 certs.go:256] generating profile certs ...
	I0802 19:12:23.497982   75193 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key
	I0802 19:12:23.497998   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt with IP's: []
	I0802 19:12:23.780494   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt ...
	I0802 19:12:23.780523   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.crt: {Name:mk6d79385d84cde35ba63f1e39377134c97a4668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.780701   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key ...
	I0802 19:12:23.780716   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/client.key: {Name:mk91ff7c20a12742080c4c3b28589065298bf144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.780818   75193 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0
	I0802 19:12:23.780838   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217]
	I0802 19:12:23.861010   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 ...
	I0802 19:12:23.861045   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0: {Name:mk2bf665a2b367ab259bc638243a7580794de0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.861227   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0 ...
	I0802 19:12:23.861244   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0: {Name:mk6f19721002b2c31a6225e914ebc265bd9ee3a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:23.861341   75193 certs.go:381] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt.ce9166f0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt
	I0802 19:12:23.861441   75193 certs.go:385] copying /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key.ce9166f0 -> /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key
	I0802 19:12:23.861499   75193 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key
	I0802 19:12:23.861513   75193 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt with IP's: []
	I0802 19:12:24.015265   75193 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt ...
	I0802 19:12:24.015298   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt: {Name:mk96c0d262fb9f2102d4e8c5405f62e005866bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:24.015461   75193 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key ...
	I0802 19:12:24.015474   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key: {Name:mk00783a7287142eebca5616c32bf367e13f943c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:24.015635   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem (1338 bytes)
	W0802 19:12:24.015667   75193 certs.go:480] ignoring /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547_empty.pem, impossibly tiny 0 bytes
	I0802 19:12:24.015674   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca-key.pem (1679 bytes)
	I0802 19:12:24.015697   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/ca.pem (1078 bytes)
	I0802 19:12:24.015720   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/cert.pem (1123 bytes)
	I0802 19:12:24.015738   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/certs/key.pem (1679 bytes)
	I0802 19:12:24.015777   75193 certs.go:484] found cert: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem (1708 bytes)
	I0802 19:12:24.016358   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0802 19:12:24.041946   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0802 19:12:24.064928   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0802 19:12:24.086890   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0802 19:12:24.110024   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0802 19:12:24.132631   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0802 19:12:24.159174   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0802 19:12:24.187622   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/bridge-800809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0802 19:12:24.211947   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0802 19:12:24.237779   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/certs/12547.pem --> /usr/share/ca-certificates/12547.pem (1338 bytes)
	I0802 19:12:24.260948   75193 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/ssl/certs/125472.pem --> /usr/share/ca-certificates/125472.pem (1708 bytes)
	I0802 19:12:24.284197   75193 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0802 19:12:24.299222   75193 ssh_runner.go:195] Run: openssl version
	I0802 19:12:24.304693   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0802 19:12:24.315256   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.319665   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  2 17:28 /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.319732   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0802 19:12:24.325474   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0802 19:12:24.335272   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12547.pem && ln -fs /usr/share/ca-certificates/12547.pem /etc/ssl/certs/12547.pem"
	I0802 19:12:24.345445   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.349628   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  2 17:40 /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.349690   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12547.pem
	I0802 19:12:24.355330   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12547.pem /etc/ssl/certs/51391683.0"
	I0802 19:12:24.365634   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/125472.pem && ln -fs /usr/share/ca-certificates/125472.pem /etc/ssl/certs/125472.pem"
	I0802 19:12:24.375806   75193 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.380025   75193 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  2 17:40 /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.380077   75193 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/125472.pem
	I0802 19:12:24.385600   75193 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/125472.pem /etc/ssl/certs/3ec20f2e.0"
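A note on the symlink names used above: /etc/ssl/certs/b5213941.0, 51391683.0 and 3ec20f2e.0 follow OpenSSL's hashed-directory convention, where the link name is the subject hash that `openssl x509 -hash -noout` prints for the certificate. The two steps the log performs for each CA certificate can be combined into one sketch (same commands as in the log, nothing new assumed):

# e.g. for minikubeCA.pem; prints b5213941 and creates /etc/ssl/certs/b5213941.0
H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"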
	I0802 19:12:24.395153   75193 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0802 19:12:24.398785   75193 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0802 19:12:24.398851   75193 kubeadm.go:392] StartCluster: {Name:bridge-800809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:bridge-800809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 19:12:24.398920   75193 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0802 19:12:24.398974   75193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0802 19:12:24.437132   75193 cri.go:89] found id: ""
	I0802 19:12:24.437207   75193 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0802 19:12:24.446770   75193 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0802 19:12:24.455740   75193 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0802 19:12:24.465253   75193 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0802 19:12:24.465270   75193 kubeadm.go:157] found existing configuration files:
	
	I0802 19:12:24.465324   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0802 19:12:24.474207   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0802 19:12:24.474261   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0802 19:12:24.483190   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0802 19:12:24.491641   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0802 19:12:24.491701   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0802 19:12:24.500197   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0802 19:12:24.508996   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0802 19:12:24.509063   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0802 19:12:24.518056   75193 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0802 19:12:24.526422   75193 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0802 19:12:24.526467   75193 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0802 19:12:24.536722   75193 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0802 19:12:24.589482   75193 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0802 19:12:24.589560   75193 kubeadm.go:310] [preflight] Running pre-flight checks
	I0802 19:12:24.706338   75193 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0802 19:12:24.706440   75193 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0802 19:12:24.706549   75193 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0802 19:12:24.900259   75193 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0802 19:12:23.650211   73373 addons.go:510] duration metric: took 990.432446ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0802 19:12:24.114834   73373 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-800809" context rescaled to 1 replicas
	I0802 19:12:25.615165   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:25.004054   75193 out.go:204]   - Generating certificates and keys ...
	I0802 19:12:25.004204   75193 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0802 19:12:25.004316   75193 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0802 19:12:25.175526   75193 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0802 19:12:25.561760   75193 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0802 19:12:26.054499   75193 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0802 19:12:26.358987   75193 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0802 19:12:26.705022   75193 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0802 19:12:26.705240   75193 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-800809 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0802 19:12:26.848800   75193 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0802 19:12:26.849134   75193 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-800809 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0802 19:12:27.154661   75193 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0802 19:12:27.210136   75193 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0802 19:12:27.334288   75193 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0802 19:12:27.334518   75193 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0802 19:12:27.418951   75193 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0802 19:12:27.526840   75193 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0802 19:12:27.784216   75193 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0802 19:12:27.904018   75193 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0802 19:12:28.068523   75193 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0802 19:12:28.069038   75193 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0802 19:12:28.072981   75193 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0802 19:12:28.074855   75193 out.go:204]   - Booting up control plane ...
	I0802 19:12:28.074950   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0802 19:12:28.075038   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0802 19:12:28.075132   75193 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0802 19:12:28.093122   75193 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0802 19:12:28.093204   75193 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0802 19:12:28.093237   75193 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0802 19:12:28.228105   75193 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0802 19:12:28.228228   75193 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0802 19:12:28.729232   75193 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.610248ms
	I0802 19:12:28.729350   75193 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0802 19:12:28.115252   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:30.614836   73373 node_ready.go:53] node "flannel-800809" has status "Ready":"False"
	I0802 19:12:33.727684   75193 kubeadm.go:310] [api-check] The API server is healthy after 5.001501231s
	I0802 19:12:33.745104   75193 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0802 19:12:33.763143   75193 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0802 19:12:33.785887   75193 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0802 19:12:33.786077   75193 kubeadm.go:310] [mark-control-plane] Marking the node bridge-800809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0802 19:12:33.798343   75193 kubeadm.go:310] [bootstrap-token] Using token: bqf6gf.15yboeq8gzijnqor
	I0802 19:12:33.799779   75193 out.go:204]   - Configuring RBAC rules ...
	I0802 19:12:33.799941   75193 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0802 19:12:33.805173   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0802 19:12:33.812998   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0802 19:12:33.980353   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0802 19:12:33.988325   75193 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0802 19:12:33.996647   75193 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0802 19:12:34.135857   75193 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0802 19:12:34.562638   75193 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0802 19:12:35.135171   75193 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0802 19:12:35.136152   75193 kubeadm.go:310] 
	I0802 19:12:35.136263   75193 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0802 19:12:35.136285   75193 kubeadm.go:310] 
	I0802 19:12:35.136399   75193 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0802 19:12:35.136408   75193 kubeadm.go:310] 
	I0802 19:12:35.136462   75193 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0802 19:12:35.136520   75193 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0802 19:12:35.136563   75193 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0802 19:12:35.136570   75193 kubeadm.go:310] 
	I0802 19:12:35.136615   75193 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0802 19:12:35.136622   75193 kubeadm.go:310] 
	I0802 19:12:35.136672   75193 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0802 19:12:35.136678   75193 kubeadm.go:310] 
	I0802 19:12:35.136726   75193 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0802 19:12:35.136847   75193 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0802 19:12:35.136958   75193 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0802 19:12:35.136974   75193 kubeadm.go:310] 
	I0802 19:12:35.137078   75193 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0802 19:12:35.137174   75193 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0802 19:12:35.137184   75193 kubeadm.go:310] 
	I0802 19:12:35.137299   75193 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bqf6gf.15yboeq8gzijnqor \
	I0802 19:12:35.137444   75193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 \
	I0802 19:12:35.137472   75193 kubeadm.go:310] 	--control-plane 
	I0802 19:12:35.137481   75193 kubeadm.go:310] 
	I0802 19:12:35.137588   75193 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0802 19:12:35.137597   75193 kubeadm.go:310] 
	I0802 19:12:35.137698   75193 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bqf6gf.15yboeq8gzijnqor \
	I0802 19:12:35.137853   75193 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8e17f8e233f5f0f5930eeaec110fa90cc15f37fda0629a74cb0b75be66c2ad6 
	I0802 19:12:35.138018   75193 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
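The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. Should it need to be recomputed on the control-plane node, the pipeline documented for kubeadm join is sketched below; the CA path used here is the certificatesDir this cluster is configured with (/var/lib/minikube/certs), and an RSA CA key is assumed:

openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'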
	I0802 19:12:35.138032   75193 cni.go:84] Creating CNI manager for "bridge"
	I0802 19:12:35.139873   75193 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0802 19:12:32.614484   73373 node_ready.go:49] node "flannel-800809" has status "Ready":"True"
	I0802 19:12:32.614509   73373 node_ready.go:38] duration metric: took 9.003663366s for node "flannel-800809" to be "Ready" ...
	I0802 19:12:32.614517   73373 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:32.621438   73373 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:34.627722   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:36.628390   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:35.141156   75193 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0802 19:12:35.151585   75193 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
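The exact contents of the 496-byte /etc/cni/net.d/1-k8s.conflist are not captured in this log. Purely as an assumed illustration of the shape a bridge CNI conflist takes for the 10.244.0.0/16 pod CIDR used here (this is not the actual file minikube writes), it would look roughly like:

cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF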
	I0802 19:12:35.171783   75193 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0802 19:12:35.171864   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:35.171871   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-800809 minikube.k8s.io/updated_at=2024_08_02T19_12_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9 minikube.k8s.io/name=bridge-800809 minikube.k8s.io/primary=true
	I0802 19:12:35.208419   75193 ops.go:34] apiserver oom_adj: -16
	I0802 19:12:35.280187   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:35.781129   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:36.280605   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:36.780608   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:37.281052   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:37.780366   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:38.281227   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:38.780432   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:39.127537   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:41.628305   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:39.280972   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:39.781229   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:40.280979   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:40.781125   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:41.281094   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:41.780907   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:42.280287   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:42.781055   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:43.280412   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:43.780602   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:44.127986   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:46.128398   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:44.281222   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:44.780914   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:45.280901   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:45.781019   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:46.281000   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:46.780354   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:47.280417   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:47.780433   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:48.281087   75193 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0802 19:12:48.387665   75193 kubeadm.go:1113] duration metric: took 13.21586494s to wait for elevateKubeSystemPrivileges
	I0802 19:12:48.387702   75193 kubeadm.go:394] duration metric: took 23.988862741s to StartCluster
	I0802 19:12:48.387722   75193 settings.go:142] acquiring lock: {Name:mk582558c1d72084a3bea637f0d8fe9acdbf5ec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:48.387791   75193 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 19:12:48.389325   75193 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/kubeconfig: {Name:mk495788848327cf9c932ebb1021f6839ea3b495 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 19:12:48.389558   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0802 19:12:48.389586   75193 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0802 19:12:48.389650   75193 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0802 19:12:48.389724   75193 addons.go:69] Setting storage-provisioner=true in profile "bridge-800809"
	I0802 19:12:48.389768   75193 addons.go:234] Setting addon storage-provisioner=true in "bridge-800809"
	I0802 19:12:48.389769   75193 addons.go:69] Setting default-storageclass=true in profile "bridge-800809"
	I0802 19:12:48.389807   75193 host.go:66] Checking if "bridge-800809" exists ...
	I0802 19:12:48.389825   75193 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-800809"
	I0802 19:12:48.389807   75193 config.go:182] Loaded profile config "bridge-800809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 19:12:48.390324   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.390356   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.390362   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.390374   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.391209   75193 out.go:177] * Verifying Kubernetes components...
	I0802 19:12:48.392578   75193 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0802 19:12:48.405943   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0802 19:12:48.406396   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.406955   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.406982   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.407364   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.407854   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.407880   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.410533   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I0802 19:12:48.411091   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.411569   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.411594   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.411906   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.412083   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.415357   75193 addons.go:234] Setting addon default-storageclass=true in "bridge-800809"
	I0802 19:12:48.415391   75193 host.go:66] Checking if "bridge-800809" exists ...
	I0802 19:12:48.417917   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.417950   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.426395   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44123
	I0802 19:12:48.426906   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.427498   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.427523   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.427904   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.428486   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.430436   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:48.432680   75193 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0802 19:12:48.434102   75193 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:48.434122   75193 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0802 19:12:48.434141   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:48.434413   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0802 19:12:48.435250   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.436014   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.436047   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.436449   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.437207   75193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 19:12:48.437242   75193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 19:12:48.437605   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.438168   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:48.438194   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.438433   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:48.438636   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:48.438838   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:48.439011   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:48.453919   75193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0802 19:12:48.454371   75193 main.go:141] libmachine: () Calling .GetVersion
	I0802 19:12:48.454914   75193 main.go:141] libmachine: Using API Version  1
	I0802 19:12:48.454929   75193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 19:12:48.455326   75193 main.go:141] libmachine: () Calling .GetMachineName
	I0802 19:12:48.455550   75193 main.go:141] libmachine: (bridge-800809) Calling .GetState
	I0802 19:12:48.457279   75193 main.go:141] libmachine: (bridge-800809) Calling .DriverName
	I0802 19:12:48.457525   75193 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:48.457540   75193 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0802 19:12:48.457554   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHHostname
	I0802 19:12:48.460482   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.460785   75193 main.go:141] libmachine: (bridge-800809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:09:00", ip: ""} in network mk-bridge-800809: {Iface:virbr3 ExpiryTime:2024-08-02 20:12:06 +0000 UTC Type:0 Mac:52:54:00:ca:09:00 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:bridge-800809 Clientid:01:52:54:00:ca:09:00}
	I0802 19:12:48.460983   75193 main.go:141] libmachine: (bridge-800809) DBG | domain bridge-800809 has defined IP address 192.168.39.217 and MAC address 52:54:00:ca:09:00 in network mk-bridge-800809
	I0802 19:12:48.461054   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHPort
	I0802 19:12:48.461263   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHKeyPath
	I0802 19:12:48.461419   75193 main.go:141] libmachine: (bridge-800809) Calling .GetSSHUsername
	I0802 19:12:48.461612   75193 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/bridge-800809/id_rsa Username:docker}
	I0802 19:12:48.623544   75193 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0802 19:12:48.623611   75193 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0802 19:12:48.760972   75193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0802 19:12:48.764480   75193 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0802 19:12:49.211975   75193 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0802 19:12:49.212074   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.212094   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.212377   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.212400   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.212412   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.212427   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.212436   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.212712   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.212747   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.213652   75193 node_ready.go:35] waiting up to 15m0s for node "bridge-800809" to be "Ready" ...
	I0802 19:12:49.234253   75193 node_ready.go:49] node "bridge-800809" has status "Ready":"True"
	I0802 19:12:49.234273   75193 node_ready.go:38] duration metric: took 20.595503ms for node "bridge-800809" to be "Ready" ...
	I0802 19:12:49.234286   75193 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:49.247634   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.247664   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.247904   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.247964   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.247991   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.251549   75193 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.727647   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.727674   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.728001   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.728021   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.728032   75193 main.go:141] libmachine: Making call to close driver server
	I0802 19:12:49.728040   75193 main.go:141] libmachine: (bridge-800809) Calling .Close
	I0802 19:12:49.728272   75193 main.go:141] libmachine: Successfully made call to close driver server
	I0802 19:12:49.728404   75193 main.go:141] libmachine: Making call to close connection to plugin binary
	I0802 19:12:49.728305   75193 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-800809" context rescaled to 1 replicas
	I0802 19:12:49.728335   75193 main.go:141] libmachine: (bridge-800809) DBG | Closing plugin on server side
	I0802 19:12:49.729853   75193 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0802 19:12:48.128861   73373 pod_ready.go:102] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:49.131931   73373 pod_ready.go:92] pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.131960   73373 pod_ready.go:81] duration metric: took 16.510491582s for pod "coredns-7db6d8ff4d-n59rs" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.131972   73373 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.138318   73373 pod_ready.go:92] pod "etcd-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.138336   73373 pod_ready.go:81] duration metric: took 6.35742ms for pod "etcd-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.138345   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.143670   73373 pod_ready.go:92] pod "kube-apiserver-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.143699   73373 pod_ready.go:81] duration metric: took 5.34653ms for pod "kube-apiserver-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.143715   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.147641   73373 pod_ready.go:92] pod "kube-controller-manager-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.147661   73373 pod_ready.go:81] duration metric: took 3.938378ms for pod "kube-controller-manager-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.147673   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-tnw7q" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.151857   73373 pod_ready.go:92] pod "kube-proxy-tnw7q" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.151881   73373 pod_ready.go:81] duration metric: took 4.200828ms for pod "kube-proxy-tnw7q" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.151892   73373 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.526195   73373 pod_ready.go:92] pod "kube-scheduler-flannel-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:12:49.526217   73373 pod_ready.go:81] duration metric: took 374.318187ms for pod "kube-scheduler-flannel-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:12:49.526226   73373 pod_ready.go:38] duration metric: took 16.911698171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:12:49.526240   73373 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:12:49.526284   73373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:12:49.541366   73373 api_server.go:72] duration metric: took 26.881711409s to wait for apiserver process to appear ...
	I0802 19:12:49.541391   73373 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:12:49.541414   73373 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I0802 19:12:49.546469   73373 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
	I0802 19:12:49.547600   73373 api_server.go:141] control plane version: v1.30.3
	I0802 19:12:49.547627   73373 api_server.go:131] duration metric: took 6.228546ms to wait for apiserver health ...
	I0802 19:12:49.547637   73373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:12:49.728652   73373 system_pods.go:59] 7 kube-system pods found
	I0802 19:12:49.728685   73373 system_pods.go:61] "coredns-7db6d8ff4d-n59rs" [450a63b5-55c6-40b7-985b-f444ed2d9fba] Running
	I0802 19:12:49.728692   73373 system_pods.go:61] "etcd-flannel-800809" [929ed6c8-4ba2-4614-8bd6-a1f5bb842702] Running
	I0802 19:12:49.728697   73373 system_pods.go:61] "kube-apiserver-flannel-800809" [1424e1f7-4929-40dd-ac34-73e3b1ea59c2] Running
	I0802 19:12:49.728704   73373 system_pods.go:61] "kube-controller-manager-flannel-800809" [02d0cf02-543d-4def-89d2-8fca3a15a0cc] Running
	I0802 19:12:49.728712   73373 system_pods.go:61] "kube-proxy-tnw7q" [a653fa05-c53f-459f-b85e-93f3a80e3be5] Running
	I0802 19:12:49.728716   73373 system_pods.go:61] "kube-scheduler-flannel-800809" [94d72541-3fef-4cf6-b7ef-6e69eb8d763f] Running
	I0802 19:12:49.728723   73373 system_pods.go:61] "storage-provisioner" [9078e148-1a3b-4849-877f-d4f664235a43] Running
	I0802 19:12:49.728731   73373 system_pods.go:74] duration metric: took 181.087258ms to wait for pod list to return data ...
	I0802 19:12:49.728743   73373 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:12:49.924765   73373 default_sa.go:45] found service account: "default"
	I0802 19:12:49.924790   73373 default_sa.go:55] duration metric: took 196.038261ms for default service account to be created ...
	I0802 19:12:49.924799   73373 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:12:50.128518   73373 system_pods.go:86] 7 kube-system pods found
	I0802 19:12:50.128551   73373 system_pods.go:89] "coredns-7db6d8ff4d-n59rs" [450a63b5-55c6-40b7-985b-f444ed2d9fba] Running
	I0802 19:12:50.128560   73373 system_pods.go:89] "etcd-flannel-800809" [929ed6c8-4ba2-4614-8bd6-a1f5bb842702] Running
	I0802 19:12:50.128566   73373 system_pods.go:89] "kube-apiserver-flannel-800809" [1424e1f7-4929-40dd-ac34-73e3b1ea59c2] Running
	I0802 19:12:50.128572   73373 system_pods.go:89] "kube-controller-manager-flannel-800809" [02d0cf02-543d-4def-89d2-8fca3a15a0cc] Running
	I0802 19:12:50.128578   73373 system_pods.go:89] "kube-proxy-tnw7q" [a653fa05-c53f-459f-b85e-93f3a80e3be5] Running
	I0802 19:12:50.128583   73373 system_pods.go:89] "kube-scheduler-flannel-800809" [94d72541-3fef-4cf6-b7ef-6e69eb8d763f] Running
	I0802 19:12:50.128593   73373 system_pods.go:89] "storage-provisioner" [9078e148-1a3b-4849-877f-d4f664235a43] Running
	I0802 19:12:50.128603   73373 system_pods.go:126] duration metric: took 203.799009ms to wait for k8s-apps to be running ...
	I0802 19:12:50.128616   73373 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:12:50.128668   73373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:12:50.146666   73373 system_svc.go:56] duration metric: took 18.039008ms WaitForService to wait for kubelet
	I0802 19:12:50.146702   73373 kubeadm.go:582] duration metric: took 27.487050042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:12:50.146728   73373 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:12:50.325380   73373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:12:50.325404   73373 node_conditions.go:123] node cpu capacity is 2
	I0802 19:12:50.325415   73373 node_conditions.go:105] duration metric: took 178.680912ms to run NodePressure ...
	I0802 19:12:50.325426   73373 start.go:241] waiting for startup goroutines ...
	I0802 19:12:50.325432   73373 start.go:246] waiting for cluster config update ...
	I0802 19:12:50.325443   73373 start.go:255] writing updated cluster config ...
	I0802 19:12:50.325714   73373 ssh_runner.go:195] Run: rm -f paused
	I0802 19:12:50.372070   73373 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:12:50.373849   73373 out.go:177] * Done! kubectl is now configured to use "flannel-800809" cluster and "default" namespace by default
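
	The flannel-800809 run above converges once api_server.go sees https://192.168.50.5:8443/healthz return 200 ("ok") and the system-critical pods report Ready. The following is a minimal sketch of that healthz poll, not minikube's actual implementation: the URL is the one shown in the log, and the TLS verification skip is an assumption made for brevity (the real check authenticates with the cluster's client certificates).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 or the timeout elapses. Simplified sketch of the check logged by
	// api_server.go above; error handling and backoff are kept minimal.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip certificate verification for brevity only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}

	func main() {
		// The address mirrors the one in the log; substitute your own cluster's.
		if err := waitForHealthz("https://192.168.50.5:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
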
	I0802 19:12:49.731163   75193 addons.go:510] duration metric: took 1.341512478s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0802 19:12:51.257274   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:53.257732   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:55.257866   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:12:57.758927   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:00.258370   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:02.756877   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:04.757945   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:07.258436   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:09.757346   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:11.757579   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:13.758039   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:16.257374   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:18.257937   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:20.258084   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:22.258126   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:24.757470   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:27.257825   75193 pod_ready.go:102] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"False"
	I0802 19:13:29.257281   75193 pod_ready.go:92] pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.257324   75193 pod_ready.go:81] duration metric: took 40.005751555s for pod "coredns-7db6d8ff4d-7v5ln" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.257337   75193 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.259258   75193 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zrfqs" not found
	I0802 19:13:29.259286   75193 pod_ready.go:81] duration metric: took 1.937219ms for pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace to be "Ready" ...
	E0802 19:13:29.259297   75193 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-zrfqs" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-zrfqs" not found
	I0802 19:13:29.259303   75193 pod_ready.go:78] waiting up to 15m0s for pod "etcd-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.263672   75193 pod_ready.go:92] pod "etcd-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.263693   75193 pod_ready.go:81] duration metric: took 4.38474ms for pod "etcd-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.263703   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.268684   75193 pod_ready.go:92] pod "kube-apiserver-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.268711   75193 pod_ready.go:81] duration metric: took 4.999699ms for pod "kube-apiserver-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.268725   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.273639   75193 pod_ready.go:92] pod "kube-controller-manager-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.273657   75193 pod_ready.go:81] duration metric: took 4.925321ms for pod "kube-controller-manager-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.273666   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-sg47p" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.456055   75193 pod_ready.go:92] pod "kube-proxy-sg47p" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.456079   75193 pod_ready.go:81] duration metric: took 182.40732ms for pod "kube-proxy-sg47p" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.456088   75193 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.856525   75193 pod_ready.go:92] pod "kube-scheduler-bridge-800809" in "kube-system" namespace has status "Ready":"True"
	I0802 19:13:29.856549   75193 pod_ready.go:81] duration metric: took 400.453989ms for pod "kube-scheduler-bridge-800809" in "kube-system" namespace to be "Ready" ...
	I0802 19:13:29.856559   75193 pod_ready.go:38] duration metric: took 40.622262236s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0802 19:13:29.856575   75193 api_server.go:52] waiting for apiserver process to appear ...
	I0802 19:13:29.856622   75193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 19:13:29.871247   75193 api_server.go:72] duration metric: took 41.481624333s to wait for apiserver process to appear ...
	I0802 19:13:29.871280   75193 api_server.go:88] waiting for apiserver healthz status ...
	I0802 19:13:29.871303   75193 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0802 19:13:29.876697   75193 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0802 19:13:29.877685   75193 api_server.go:141] control plane version: v1.30.3
	I0802 19:13:29.877709   75193 api_server.go:131] duration metric: took 6.422138ms to wait for apiserver health ...
	I0802 19:13:29.877718   75193 system_pods.go:43] waiting for kube-system pods to appear ...
	I0802 19:13:30.059207   75193 system_pods.go:59] 7 kube-system pods found
	I0802 19:13:30.059251   75193 system_pods.go:61] "coredns-7db6d8ff4d-7v5ln" [f2d90271-99be-4660-90b0-e0d49cb8164e] Running
	I0802 19:13:30.059259   75193 system_pods.go:61] "etcd-bridge-800809" [5f7b9afb-9447-4e87-b927-ad75682d760a] Running
	I0802 19:13:30.059263   75193 system_pods.go:61] "kube-apiserver-bridge-800809" [6875e96d-bd0f-4435-b4eb-8f84f1c886df] Running
	I0802 19:13:30.059266   75193 system_pods.go:61] "kube-controller-manager-bridge-800809" [e19b2da5-715c-4306-9318-7c06ffe02503] Running
	I0802 19:13:30.059270   75193 system_pods.go:61] "kube-proxy-sg47p" [3b228ae6-c57f-46a8-837e-ebbc3249048a] Running
	I0802 19:13:30.059273   75193 system_pods.go:61] "kube-scheduler-bridge-800809" [ce6e55e7-132f-4ccf-a483-f363617e6964] Running
	I0802 19:13:30.059276   75193 system_pods.go:61] "storage-provisioner" [e235ec51-a2c4-4df0-8714-0b0268979e99] Running
	I0802 19:13:30.059282   75193 system_pods.go:74] duration metric: took 181.557889ms to wait for pod list to return data ...
	I0802 19:13:30.059289   75193 default_sa.go:34] waiting for default service account to be created ...
	I0802 19:13:30.255438   75193 default_sa.go:45] found service account: "default"
	I0802 19:13:30.255462   75193 default_sa.go:55] duration metric: took 196.167232ms for default service account to be created ...
	I0802 19:13:30.255469   75193 system_pods.go:116] waiting for k8s-apps to be running ...
	I0802 19:13:30.458752   75193 system_pods.go:86] 7 kube-system pods found
	I0802 19:13:30.458779   75193 system_pods.go:89] "coredns-7db6d8ff4d-7v5ln" [f2d90271-99be-4660-90b0-e0d49cb8164e] Running
	I0802 19:13:30.458786   75193 system_pods.go:89] "etcd-bridge-800809" [5f7b9afb-9447-4e87-b927-ad75682d760a] Running
	I0802 19:13:30.458791   75193 system_pods.go:89] "kube-apiserver-bridge-800809" [6875e96d-bd0f-4435-b4eb-8f84f1c886df] Running
	I0802 19:13:30.458795   75193 system_pods.go:89] "kube-controller-manager-bridge-800809" [e19b2da5-715c-4306-9318-7c06ffe02503] Running
	I0802 19:13:30.458799   75193 system_pods.go:89] "kube-proxy-sg47p" [3b228ae6-c57f-46a8-837e-ebbc3249048a] Running
	I0802 19:13:30.458803   75193 system_pods.go:89] "kube-scheduler-bridge-800809" [ce6e55e7-132f-4ccf-a483-f363617e6964] Running
	I0802 19:13:30.458806   75193 system_pods.go:89] "storage-provisioner" [e235ec51-a2c4-4df0-8714-0b0268979e99] Running
	I0802 19:13:30.458815   75193 system_pods.go:126] duration metric: took 203.338605ms to wait for k8s-apps to be running ...
	I0802 19:13:30.458824   75193 system_svc.go:44] waiting for kubelet service to be running ....
	I0802 19:13:30.458883   75193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 19:13:30.474400   75193 system_svc.go:56] duration metric: took 15.566458ms WaitForService to wait for kubelet
	I0802 19:13:30.474437   75193 kubeadm.go:582] duration metric: took 42.084819474s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0802 19:13:30.474461   75193 node_conditions.go:102] verifying NodePressure condition ...
	I0802 19:13:30.656106   75193 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0802 19:13:30.656134   75193 node_conditions.go:123] node cpu capacity is 2
	I0802 19:13:30.656145   75193 node_conditions.go:105] duration metric: took 181.678537ms to run NodePressure ...
	I0802 19:13:30.656156   75193 start.go:241] waiting for startup goroutines ...
	I0802 19:13:30.656162   75193 start.go:246] waiting for cluster config update ...
	I0802 19:13:30.656171   75193 start.go:255] writing updated cluster config ...
	I0802 19:13:30.656438   75193 ssh_runner.go:195] Run: rm -f paused
	I0802 19:13:30.702455   75193 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0802 19:13:30.704336   75193 out.go:177] * Done! kubectl is now configured to use "bridge-800809" cluster and "default" namespace by default
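
	Both clusters report "Done!" only after pod_ready.go has observed every system-critical pod with the Ready condition set to True and system_svc.go has confirmed the kubelet service. The sketch below shows an equivalent readiness poll using client-go; it is an illustrative assumption rather than the helper used in the log, and it assumes a kubeconfig at the default location (the real wait also filters by the component labels listed above).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, which is
	// the condition the pod_ready.go waits in the log above are checking for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default path written by minikube.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll kube-system pods until all are Ready or the deadline passes.
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil {
				ready := 0
				for i := range pods.Items {
					if isPodReady(&pods.Items[i]) {
						ready++
					}
				}
				fmt.Printf("%d/%d kube-system pods Ready\n", ready, len(pods.Items))
				if len(pods.Items) > 0 && ready == len(pods.Items) {
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for kube-system pods to become Ready")
	}
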
	
	
	==> CRI-O <==
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.870470475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626449870449677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=255ef2f7-956d-4e2d-8652-b0d0ea897aae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.871007932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40990465-1f16-4b51-91e0-6535852f68cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.871100782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40990465-1f16-4b51-91e0-6535852f68cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.871323073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40990465-1f16-4b51-91e0-6535852f68cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.905298758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1411592-e883-4529-a46c-a5927e05405b name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.905380928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1411592-e883-4529-a46c-a5927e05405b name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.906371758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5624562-1d3b-4c43-9ed9-bb05880f5a6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.906780201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626449906756505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5624562-1d3b-4c43-9ed9-bb05880f5a6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.907238571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f08bcbf-6423-4e87-a6ad-74c3e8691926 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.907299845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f08bcbf-6423-4e87-a6ad-74c3e8691926 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.907500146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f08bcbf-6423-4e87-a6ad-74c3e8691926 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.948458795Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3dec63b5-599b-4b62-b869-bbe825064243 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.948531813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3dec63b5-599b-4b62-b869-bbe825064243 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.949718450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=644dc570-fcd0-441b-ad33-bfbc378f6ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.950193756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626449950170062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=644dc570-fcd0-441b-ad33-bfbc378f6ae1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.950760757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8f168a2-0b3b-4622-a66e-e533e5014ac2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.950820184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8f168a2-0b3b-4622-a66e-e533e5014ac2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.951027679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8f168a2-0b3b-4622-a66e-e533e5014ac2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.980984566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8dfe9a2-95d1-46b0-aaea-d1345abe35f2 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.981126239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8dfe9a2-95d1-46b0-aaea-d1345abe35f2 name=/runtime.v1.RuntimeService/Version
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.982266221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=776f317d-3537-46aa-a173-b6ae7c1cce0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.982665138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722626449982644215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=776f317d-3537-46aa-a173-b6ae7c1cce0c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.983592775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab4e5334-3a14-462c-b236-9b9287cef1bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.983644310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab4e5334-3a14-462c-b236-9b9287cef1bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 02 19:20:49 embed-certs-757654 crio[723]: time="2024-08-02 19:20:49.983841505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4,PodSandboxId:ae968924856f7f8ac1fce76b0ec17def939cc09d9b5aa5a6fdea5117efbc9475,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722625532187295375,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d3300a13-9ee5-4eeb-9e21-9ef40aad1379,},Annotations:map[string]string{io.kubernetes.container.hash: 8bdc195f,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f,PodSandboxId:c43cc07a8b6a531382f2190d503ccb3d565af979300ae05e33994f464934de61,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625532017402644,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bm67n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97410089-9b08-4ea7-9636-ce635935858f,},Annotations:map[string]string{io.kubernetes.container.hash: 9f62d51e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18,PodSandboxId:41e6c4c44f01ab6c95da23de3109c1f369c25f65a60d27f59bbbf7ee3a9d4747,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722625531662344311,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rfg9v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
511162d-2bd2-490f-b789-925b904bd691,},Annotations:map[string]string{io.kubernetes.container.hash: f89db96e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab,PodSandboxId:1f757d0c569a7fce28e5e5ace66ac9228567c3ce750c74a00e42ac76d50a1879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt
:1722625531107472517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8w67s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d73c44-1601-4c2f-8399-259dbcd18813,},Annotations:map[string]string{io.kubernetes.container.hash: cd3cf495,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3,PodSandboxId:7d52af71254c09fa83eb239f38a2d85f0b60c3bde73b6627ff3001382b3067cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722625512057524152,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7,PodSandboxId:c3ca7a780f4698af389defbc12929295001bde7b644aa61fdc53a8a5173af302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722625512037909322,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3321b35ee4ad27dd1b67cecf2104fbae,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0,PodSandboxId:f482ad4efa6dd1153354cb61b2c39f490fb370bc4bd2061d7ea325ba7b5887b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722625512012480354,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594ee16db5c0e78927c7ad037e6e2041,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc,PodSandboxId:d09c466332c2a7a93ce2632326150ed82662ffd37b6a3f64b0f8ba18776ab624,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722625511951198931,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b3185a2e6b2067998f176e6f7519a8,},Annotations:map[string]string{io.kubernetes.container.hash: 90f9c977,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63,PodSandboxId:7eb934b7d29e5d7a409a3dcb21eea0a0b7ac97eb107959fae2fb1481679816fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722625219318277589,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-757654,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff13ab4ff59bb3cca6dc035577ba4b5,},Annotations:map[string]string{io.kubernetes.container.hash: aa68a6c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab4e5334-3a14-462c-b236-9b9287cef1bc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd26613a29e0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   ae968924856f7       storage-provisioner
	e3a1d5601411c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   c43cc07a8b6a5       coredns-7db6d8ff4d-bm67n
	99c2bf68ade76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   41e6c4c44f01a       coredns-7db6d8ff4d-rfg9v
	1c94d11725896       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   15 minutes ago      Running             kube-proxy                0                   1f757d0c569a7       kube-proxy-8w67s
	8cfa071ae2581       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   15 minutes ago      Running             kube-apiserver            2                   7d52af71254c0       kube-apiserver-embed-certs-757654
	85f83ecf32e6a       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   15 minutes ago      Running             kube-scheduler            2                   c3ca7a780f469       kube-scheduler-embed-certs-757654
	6f2cf313b754a       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   15 minutes ago      Running             kube-controller-manager   2                   f482ad4efa6dd       kube-controller-manager-embed-certs-757654
	215b59df5d654       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   d09c466332c2a       etcd-embed-certs-757654
	b7aa928d29603       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   20 minutes ago      Exited              kube-apiserver            1                   7eb934b7d29e5       kube-apiserver-embed-certs-757654
	
	
	==> coredns [99c2bf68ade767841e843b9c339671d2090d2c096c0d784bd8f13d1d367b8b18] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e3a1d5601411c8a850e29d2f7f151a5a2ddf65ab801a0f1cbb421a881cc9bf2f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-757654
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-757654
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=db72189ad8010dba8f92a33c09569de9ae45dca9
	                    minikube.k8s.io/name=embed-certs-757654
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_02T19_05_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 02 Aug 2024 19:05:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-757654
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 02 Aug 2024 19:20:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 02 Aug 2024 19:15:50 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 02 Aug 2024 19:15:50 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 02 Aug 2024 19:15:50 +0000   Fri, 02 Aug 2024 19:05:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 02 Aug 2024 19:15:50 +0000   Fri, 02 Aug 2024 19:05:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.74
	  Hostname:    embed-certs-757654
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffd2022e12cc44c49e899fab8e76d6ac
	  System UUID:                ffd2022e-12cc-44c4-9e89-9fab8e76d6ac
	  Boot ID:                    537e9d85-e3aa-4e14-8a47-e5da258ba33d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bm67n                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-rfg9v                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-757654                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-757654             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-757654    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8w67s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-757654             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-d69sk               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-757654 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-757654 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-757654 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-757654 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-757654 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-757654 event: Registered Node embed-certs-757654 in Controller
	
	
	==> dmesg <==
	[  +0.052063] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037497] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.751175] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.868325] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Aug 2 19:00] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.890168] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061366] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060401] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.193601] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.123048] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.270464] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +4.013981] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +2.084419] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +0.071000] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.530882] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.495351] kauditd_printk_skb: 79 callbacks suppressed
	[Aug 2 19:05] systemd-fstab-generator[3600]: Ignoring "noauto" option for root device
	[  +0.070800] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.486307] systemd-fstab-generator[3925]: Ignoring "noauto" option for root device
	[  +0.079417] kauditd_printk_skb: 54 callbacks suppressed
	[ +13.749009] systemd-fstab-generator[4118]: Ignoring "noauto" option for root device
	[  +0.083645] kauditd_printk_skb: 12 callbacks suppressed
	[Aug 2 19:06] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [215b59df5d65404eaf063c736b81c1ee9c11b1d04e05b64f39e420d188a563cc] <==
	{"level":"warn","ts":"2024-08-02T19:10:37.661284Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.539001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:10:37.661338Z","caller":"traceutil/trace.go:171","msg":"trace[308625752] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:741; }","duration":"162.6743ms","start":"2024-08-02T19:10:37.498648Z","end":"2024-08-02T19:10:37.661322Z","steps":["trace[308625752] 'agreement among raft nodes before linearized reading'  (duration: 162.512891ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:10:37.661586Z","caller":"traceutil/trace.go:171","msg":"trace[118216297] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"175.275652ms","start":"2024-08-02T19:10:37.486277Z","end":"2024-08-02T19:10:37.661553Z","steps":["trace[118216297] 'process raft request'  (duration: 174.659394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:10.324464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.131182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:11:10.324544Z","caller":"traceutil/trace.go:171","msg":"trace[1625272260] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:767; }","duration":"317.260794ms","start":"2024-08-02T19:11:10.00727Z","end":"2024-08-02T19:11:10.324531Z","steps":["trace[1625272260] 'count revisions from in-memory index tree'  (duration: 317.059543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:10.324592Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-02T19:11:10.007255Z","time spent":"317.321931ms","remote":"127.0.0.1:51926","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":28,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true "}
	{"level":"info","ts":"2024-08-02T19:11:11.604004Z","caller":"traceutil/trace.go:171","msg":"trace[818783036] linearizableReadLoop","detail":"{readStateIndex:848; appliedIndex:847; }","duration":"113.620833ms","start":"2024-08-02T19:11:11.490366Z","end":"2024-08-02T19:11:11.603987Z","steps":["trace[818783036] 'read index received'  (duration: 46.937532ms)","trace[818783036] 'applied index is now lower than readState.Index'  (duration: 66.682458ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-02T19:11:11.604226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.841863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T19:11:11.604405Z","caller":"traceutil/trace.go:171","msg":"trace[548918230] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:767; }","duration":"114.054495ms","start":"2024-08-02T19:11:11.490339Z","end":"2024-08-02T19:11:11.604394Z","steps":["trace[548918230] 'agreement among raft nodes before linearized reading'  (duration: 113.833282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:11:11.604361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.07225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-02T19:11:11.60486Z","caller":"traceutil/trace.go:171","msg":"trace[882510175] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:767; }","duration":"106.557551ms","start":"2024-08-02T19:11:11.498279Z","end":"2024-08-02T19:11:11.604837Z","steps":["trace[882510175] 'agreement among raft nodes before linearized reading'  (duration: 106.061843ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:11:26.278989Z","caller":"traceutil/trace.go:171","msg":"trace[1944138405] transaction","detail":"{read_only:false; response_revision:782; number_of_response:1; }","duration":"141.34292ms","start":"2024-08-02T19:11:26.137633Z","end":"2024-08-02T19:11:26.278976Z","steps":["trace[1944138405] 'process raft request'  (duration: 141.292814ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:11:26.279341Z","caller":"traceutil/trace.go:171","msg":"trace[683198444] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"145.023921ms","start":"2024-08-02T19:11:26.134301Z","end":"2024-08-02T19:11:26.279325Z","steps":["trace[683198444] 'process raft request'  (duration: 113.416914ms)","trace[683198444] 'compare'  (duration: 31.113487ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-02T19:12:24.837447Z","caller":"traceutil/trace.go:171","msg":"trace[852352281] transaction","detail":"{read_only:false; response_revision:827; number_of_response:1; }","duration":"196.383204ms","start":"2024-08-02T19:12:24.641024Z","end":"2024-08-02T19:12:24.837408Z","steps":["trace[852352281] 'process raft request'  (duration: 195.876576ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:12:25.110286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.187239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-02T19:12:25.110357Z","caller":"traceutil/trace.go:171","msg":"trace[1516960778] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:827; }","duration":"121.340705ms","start":"2024-08-02T19:12:24.989001Z","end":"2024-08-02T19:12:25.110341Z","steps":["trace[1516960778] 'count revisions from in-memory index tree'  (duration: 121.096566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-02T19:12:26.290318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.453722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.72.74\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-08-02T19:12:26.290506Z","caller":"traceutil/trace.go:171","msg":"trace[1341978625] range","detail":"{range_begin:/registry/masterleases/192.168.72.74; range_end:; response_count:1; response_revision:828; }","duration":"240.672986ms","start":"2024-08-02T19:12:26.049813Z","end":"2024-08-02T19:12:26.290486Z","steps":["trace[1341978625] 'range keys from in-memory index tree'  (duration: 240.319777ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:12:27.143813Z","caller":"traceutil/trace.go:171","msg":"trace[1781765146] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"179.543675ms","start":"2024-08-02T19:12:26.964235Z","end":"2024-08-02T19:12:27.143779Z","steps":["trace[1781765146] 'process raft request'  (duration: 179.364885ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-02T19:15:13.01841Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-08-02T19:15:13.027348Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":720,"took":"8.368306ms","hash":2139926935,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2367488,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-08-02T19:15:13.027441Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2139926935,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2024-08-02T19:20:13.025461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":962}
	{"level":"info","ts":"2024-08-02T19:20:13.029607Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":962,"took":"3.637127ms","hash":3166861053,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1572864,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-02T19:20:13.029697Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3166861053,"revision":962,"compact-revision":720}
	
	
	==> kernel <==
	 19:20:50 up 20 min,  0 users,  load average: 0.02, 0.06, 0.04
	Linux embed-certs-757654 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8cfa071ae25817cafbd2505e8cfca69119aecf5c2bda1137fe0f6a11b09725a3] <==
	I0802 19:15:15.425119       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:16:15.424269       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:16:15.424550       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:16:15.424586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:16:15.425425       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:16:15.425498       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:16:15.426649       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:18:15.425608       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:18:15.425887       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0802 19:18:15.425916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:18:15.427187       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:18:15.427348       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:18:15.427385       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0802 19:20:14.429918       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:20:14.430045       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0802 19:20:15.431173       1 handler_proxy.go:93] no RequestInfo found in the context
	E0802 19:20:15.431317       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0802 19:20:15.431174       1 handler_proxy.go:93] no RequestInfo found in the context
	I0802 19:20:15.431348       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0802 19:20:15.431439       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0802 19:20:15.433389       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b7aa928d29603a61a99235b6879de943269bca61e4cd2b0573280d3158b18e63] <==
	W0802 19:05:05.641487       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.641487       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.667854       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.748545       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.753340       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.758146       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.762807       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.947403       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.982762       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:05.993160       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.041404       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.140395       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.165763       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.224930       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.246713       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.334455       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.368580       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.408219       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.410751       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.526717       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.868034       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:06.871608       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.023805       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.255873       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0802 19:05:07.403161       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6f2cf313b754a84629c4ddf416bfaea4b6805f308c29cfa47cad78617e76bed0] <==
	I0802 19:15:00.594391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:15:30.107645       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:15:30.602504       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:16:00.112580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:16:00.609992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:16:30.117477       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:16:30.617671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0802 19:16:30.756639       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="158.615µs"
	I0802 19:16:41.752422       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="122.933µs"
	E0802 19:17:00.121870       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:17:00.625577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:17:30.126929       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:17:30.633385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:18:00.132265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:18:00.641287       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:18:30.137737       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:18:30.650569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:19:00.142960       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:19:00.657536       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:19:30.148687       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:19:30.666011       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:20:00.154641       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:20:00.674519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0802 19:20:30.159826       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0802 19:20:30.682656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1c94d117258965a24491514da860bfeeac3202a6a361ab891faf59e6ea3ac6ab] <==
	I0802 19:05:31.622489       1 server_linux.go:69] "Using iptables proxy"
	I0802 19:05:31.654969       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.74"]
	I0802 19:05:31.960766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0802 19:05:31.960928       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0802 19:05:31.960999       1 server_linux.go:165] "Using iptables Proxier"
	I0802 19:05:31.976944       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0802 19:05:31.992934       1 server.go:872] "Version info" version="v1.30.3"
	I0802 19:05:31.992958       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0802 19:05:32.020197       1 config.go:192] "Starting service config controller"
	I0802 19:05:32.026938       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0802 19:05:32.027139       1 config.go:101] "Starting endpoint slice config controller"
	I0802 19:05:32.027170       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0802 19:05:32.068952       1 config.go:319] "Starting node config controller"
	I0802 19:05:32.069354       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0802 19:05:32.169940       1 shared_informer.go:320] Caches are synced for node config
	I0802 19:05:32.228282       1 shared_informer.go:320] Caches are synced for service config
	I0802 19:05:32.228371       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [85f83ecf32e6a6ff7986504eae145875021a4e9599d9c6d31f135e4b64ba27e7] <==
	W0802 19:05:14.423272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 19:05:14.423295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 19:05:15.286917       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0802 19:05:15.287117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0802 19:05:15.344829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0802 19:05:15.344948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0802 19:05:15.352248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0802 19:05:15.352292       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0802 19:05:15.476564       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0802 19:05:15.476617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0802 19:05:15.507701       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0802 19:05:15.507756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0802 19:05:15.639854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0802 19:05:15.639921       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0802 19:05:15.667392       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0802 19:05:15.667437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0802 19:05:15.688782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0802 19:05:15.688843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0802 19:05:15.724601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0802 19:05:15.724650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0802 19:05:15.728026       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0802 19:05:15.728100       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0802 19:05:15.878717       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0802 19:05:15.878760       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0802 19:05:19.015327       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 02 19:18:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:18:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:18:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:18:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:18:28 embed-certs-757654 kubelet[3932]: E0802 19:18:28.743016    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:18:43 embed-certs-757654 kubelet[3932]: E0802 19:18:43.739157    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:18:54 embed-certs-757654 kubelet[3932]: E0802 19:18:54.740489    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:19:07 embed-certs-757654 kubelet[3932]: E0802 19:19:07.739841    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:19:16 embed-certs-757654 kubelet[3932]: E0802 19:19:16.755724    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:19:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:19:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:19:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:19:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:19:19 embed-certs-757654 kubelet[3932]: E0802 19:19:19.739708    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:19:31 embed-certs-757654 kubelet[3932]: E0802 19:19:31.739132    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:19:46 embed-certs-757654 kubelet[3932]: E0802 19:19:46.740670    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:19:58 embed-certs-757654 kubelet[3932]: E0802 19:19:58.739633    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:20:13 embed-certs-757654 kubelet[3932]: E0802 19:20:13.739849    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:20:16 embed-certs-757654 kubelet[3932]: E0802 19:20:16.754213    3932 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 02 19:20:16 embed-certs-757654 kubelet[3932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 02 19:20:16 embed-certs-757654 kubelet[3932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 02 19:20:16 embed-certs-757654 kubelet[3932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 02 19:20:16 embed-certs-757654 kubelet[3932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 02 19:20:26 embed-certs-757654 kubelet[3932]: E0802 19:20:26.740909    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	Aug 02 19:20:41 embed-certs-757654 kubelet[3932]: E0802 19:20:41.739556    3932 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-d69sk" podUID="4d7a8428-5611-44a4-93a7-4440735668f8"
	
	
	==> storage-provisioner [cd26613a29e0f2c874d9091019f6fdc7e5d3931e62918e9a6b02299bd15a6aa4] <==
	I0802 19:05:32.316581       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0802 19:05:32.347384       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0802 19:05:32.347610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0802 19:05:32.363635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0802 19:05:32.363937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8!
	I0802 19:05:32.364891       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8c60b342-6881-433c-974d-f7f6e4dc832f", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8 became leader
	I0802 19:05:32.467438       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-757654_f4afbed8-bed6-4205-87b7-420fc016cfb8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-757654 -n embed-certs-757654
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-757654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-d69sk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk: exit status 1 (59.43241ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-d69sk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-757654 describe pod metrics-server-569cc877fc-d69sk: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (376.43s)
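Note: the repeated ImagePullBackOff in the kubelet log above comes from the metrics-server pod referencing fake.domain/registry.k8s.io/echoserver:1.4, a placeholder registry that cannot resolve, and the NotFound from the post-mortem describe is likely just a namespace mismatch: the pod is listed under kube-system, while the describe runs against the default namespace. A minimal way to inspect such a pod by hand (context, pod name, and namespace taken from the log above; the deployment name is inferred from the pod name):

	# Describe the pod in the namespace it actually lives in
	kubectl --context embed-certs-757654 -n kube-system describe pod metrics-server-569cc877fc-d69sk
	# Check which image the metrics-server deployment is currently pointing at
	kubectl --context embed-certs-757654 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'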

                                                
                                    

Test pass (249/322)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 17.35
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.85
18 TestDownloadOnly/v1.30.3/DeleteAll 0.12
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-rc.0/json-events 48.59
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.05
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 116.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
36 TestAddons/Setup 142.97
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.52
44 TestAddons/parallel/InspektorGadget 11.78
46 TestAddons/parallel/HelmTiller 11.35
48 TestAddons/parallel/CSI 68.94
49 TestAddons/parallel/Headlamp 18.57
50 TestAddons/parallel/CloudSpanner 6.52
51 TestAddons/parallel/LocalPath 12.14
52 TestAddons/parallel/NvidiaDevicePlugin 5.5
53 TestAddons/parallel/Yakd 10.7
55 TestCertOptions 48.23
56 TestCertExpiration 266.74
58 TestForceSystemdFlag 52.56
59 TestForceSystemdEnv 42.06
61 TestKVMDriverInstallOrUpdate 3.71
65 TestErrorSpam/setup 40.37
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.51
69 TestErrorSpam/unpause 1.57
70 TestErrorSpam/stop 4.35
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 56.59
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 37.57
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.81
82 TestFunctional/serial/CacheCmd/cache/add_local 2
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 38.26
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.44
93 TestFunctional/serial/LogsFileCmd 1.4
94 TestFunctional/serial/InvalidService 4.17
96 TestFunctional/parallel/ConfigCmd 0.31
97 TestFunctional/parallel/DashboardCmd 17.46
98 TestFunctional/parallel/DryRun 0.27
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.87
104 TestFunctional/parallel/ServiceCmdConnect 11.52
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 40.35
108 TestFunctional/parallel/SSHCmd 0.39
109 TestFunctional/parallel/CpCmd 1.31
110 TestFunctional/parallel/MySQL 27.3
111 TestFunctional/parallel/FileSync 0.21
112 TestFunctional/parallel/CertSync 1.28
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
120 TestFunctional/parallel/License 0.57
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.92
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
130 TestFunctional/parallel/ImageCommands/ImageBuild 5.36
131 TestFunctional/parallel/ImageCommands/Setup 1.77
132 TestFunctional/parallel/ServiceCmd/DeployApp 21.17
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.63
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
150 TestFunctional/parallel/ProfileCmd/profile_list 0.25
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
152 TestFunctional/parallel/MountCmd/any-port 18.64
153 TestFunctional/parallel/ServiceCmd/List 0.52
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
156 TestFunctional/parallel/ServiceCmd/Format 0.36
157 TestFunctional/parallel/ServiceCmd/URL 0.35
158 TestFunctional/parallel/MountCmd/specific-port 1.69
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 206.98
167 TestMultiControlPlane/serial/DeployApp 6.04
168 TestMultiControlPlane/serial/PingHostFromPods 1.19
169 TestMultiControlPlane/serial/AddWorkerNode 55.41
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.4
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.38
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.12
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 292.58
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 78.69
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
188 TestJSONOutput/start/Command 95.56
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.72
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.58
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 83.16
220 TestMountStart/serial/StartWithMountFirst 27.36
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 27.92
223 TestMountStart/serial/VerifyMountSecond 0.37
224 TestMountStart/serial/DeleteFirst 0.68
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 23
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 117.07
232 TestMultiNode/serial/DeployApp2Nodes 5.22
233 TestMultiNode/serial/PingHostFrom2Pods 0.77
234 TestMultiNode/serial/AddNode 52.09
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.94
238 TestMultiNode/serial/StopNode 2.17
239 TestMultiNode/serial/StartAfterStop 38.93
241 TestMultiNode/serial/DeleteNode 2.16
243 TestMultiNode/serial/RestartMultiNode 182.4
244 TestMultiNode/serial/ValidateNameConflict 42.12
251 TestScheduledStopUnix 110.92
255 TestRunningBinaryUpgrade 184.33
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
261 TestNoKubernetes/serial/StartWithK8s 88.22
262 TestNoKubernetes/serial/StartWithStopK8s 39.07
263 TestStoppedBinaryUpgrade/Setup 2.27
264 TestStoppedBinaryUpgrade/Upgrade 149.26
265 TestNoKubernetes/serial/Start 47.05
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
267 TestNoKubernetes/serial/ProfileList 9.81
276 TestPause/serial/Start 57.6
277 TestNoKubernetes/serial/Stop 1.49
278 TestNoKubernetes/serial/StartNoArgs 44.46
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
289 TestNetworkPlugins/group/false 2.87
296 TestStartStop/group/no-preload/serial/FirstStart 165.44
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 94.04
299 TestStartStop/group/no-preload/serial/DeployApp 10.27
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.02
309 TestStartStop/group/old-k8s-version/serial/Stop 1.31
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 537.33
315 TestStartStop/group/newest-cni/serial/FirstStart 289.01
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
319 TestStartStop/group/newest-cni/serial/Stop 10.51
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/newest-cni/serial/SecondStart 37.11
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.07
325 TestStartStop/group/newest-cni/serial/Pause 2.89
327 TestStartStop/group/embed-certs/serial/FirstStart 95.53
328 TestStartStop/group/embed-certs/serial/DeployApp 10.28
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
333 TestStartStop/group/embed-certs/serial/SecondStart 625.26
339 TestNetworkPlugins/group/auto/Start 61.57
342 TestNetworkPlugins/group/kindnet/Start 89.74
343 TestNetworkPlugins/group/auto/KubeletFlags 0.2
344 TestNetworkPlugins/group/auto/NetCatPod 11.24
345 TestNetworkPlugins/group/auto/DNS 0.16
346 TestNetworkPlugins/group/auto/Localhost 0.12
347 TestNetworkPlugins/group/auto/HairPin 0.13
348 TestNetworkPlugins/group/calico/Start 85.01
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
351 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
352 TestNetworkPlugins/group/kindnet/DNS 0.17
353 TestNetworkPlugins/group/kindnet/Localhost 0.13
354 TestNetworkPlugins/group/kindnet/HairPin 0.14
355 TestNetworkPlugins/group/custom-flannel/Start 75.4
356 TestNetworkPlugins/group/enable-default-cni/Start 61.75
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.2
359 TestNetworkPlugins/group/calico/NetCatPod 11.23
360 TestNetworkPlugins/group/calico/DNS 0.18
361 TestNetworkPlugins/group/calico/Localhost 0.14
362 TestNetworkPlugins/group/calico/HairPin 0.14
363 TestNetworkPlugins/group/flannel/Start 88.69
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
366 TestNetworkPlugins/group/custom-flannel/DNS 0.23
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
371 TestNetworkPlugins/group/bridge/Start 101.77
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
377 TestNetworkPlugins/group/flannel/NetCatPod 11.22
378 TestNetworkPlugins/group/flannel/DNS 0.16
379 TestNetworkPlugins/group/flannel/Localhost 0.12
380 TestNetworkPlugins/group/flannel/HairPin 0.12
381 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
382 TestNetworkPlugins/group/bridge/NetCatPod 11.23
383 TestNetworkPlugins/group/bridge/DNS 0.14
384 TestNetworkPlugins/group/bridge/Localhost 0.12
385 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (23.46s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-039015 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-039015 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.454913587s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.46s)
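Note: with -o=json, minikube appears to report progress as a stream of JSON event objects rather than plain text. A rough sketch of watching only the event types (the .type field is an assumption based on CloudEvents-style output, and jq is not part of this suite):

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-039015 --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2 | jq -r '.type'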

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-039015
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-039015: exit status 85 (52.725539ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-039015 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |          |
	|         | -p download-only-039015        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:18.952817   12558 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:18.952945   12558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:18.952957   12558 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:18.952963   12558 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:18.953141   12558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	W0802 17:26:18.953255   12558 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19355-5397/.minikube/config/config.json: open /home/jenkins/minikube-integration/19355-5397/.minikube/config/config.json: no such file or directory
	I0802 17:26:18.953847   12558 out.go:298] Setting JSON to true
	I0802 17:26:18.954746   12558 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":523,"bootTime":1722619056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:18.954802   12558 start.go:139] virtualization: kvm guest
	I0802 17:26:18.957145   12558 out.go:97] [download-only-039015] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:26:18.957311   12558 notify.go:220] Checking for updates...
	W0802 17:26:18.957267   12558 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball: no such file or directory
	I0802 17:26:18.958684   12558 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:26:18.960079   12558 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:18.961375   12558 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:26:18.962511   12558 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:26:18.963837   12558 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0802 17:26:18.965978   12558 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 17:26:18.966178   12558 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:26:19.060889   12558 out.go:97] Using the kvm2 driver based on user configuration
	I0802 17:26:19.060915   12558 start.go:297] selected driver: kvm2
	I0802 17:26:19.060921   12558 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:26:19.061240   12558 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:19.061363   12558 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:26:19.075976   12558 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:26:19.076032   12558 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:26:19.076519   12558 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0802 17:26:19.076677   12558 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 17:26:19.076727   12558 cni.go:84] Creating CNI manager for ""
	I0802 17:26:19.076738   12558 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:26:19.076747   12558 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:26:19.076791   12558 start.go:340] cluster config:
	{Name:download-only-039015 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-039015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:26:19.076961   12558 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:19.078800   12558 out.go:97] Downloading VM boot image ...
	I0802 17:26:19.078835   12558 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0802 17:26:29.170202   12558 out.go:97] Starting "download-only-039015" primary control-plane node in "download-only-039015" cluster
	I0802 17:26:29.170247   12558 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 17:26:29.264970   12558 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0802 17:26:29.265003   12558 cache.go:56] Caching tarball of preloaded images
	I0802 17:26:29.265165   12558 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0802 17:26:29.266834   12558 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0802 17:26:29.266850   12558 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0802 17:26:29.371650   12558 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-039015 host does not exist
	  To start a cluster, run: "minikube start -p download-only-039015"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
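Note: the preload-exists check above boils down to the tarball downloaded in this log being present in the minikube cache; a quick manual equivalent (path copied from the download line in the log above):

	ls -lh /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4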

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-039015
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (17.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-380260 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-380260 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.348877466s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (17.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-380260
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-380260: exit status 85 (848.352573ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-039015 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-039015        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-039015        | download-only-039015 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only        | download-only-380260 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-380260        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:26:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:26:42.706179   12816 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:26:42.706453   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:42.706469   12816 out.go:304] Setting ErrFile to fd 2...
	I0802 17:26:42.706540   12816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:26:42.707016   12816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:26:42.707940   12816 out.go:298] Setting JSON to true
	I0802 17:26:42.708798   12816 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":547,"bootTime":1722619056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:26:42.708856   12816 start.go:139] virtualization: kvm guest
	I0802 17:26:42.710679   12816 out.go:97] [download-only-380260] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:26:42.710797   12816 notify.go:220] Checking for updates...
	I0802 17:26:42.711959   12816 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:26:42.713228   12816 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:26:42.714425   12816 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:26:42.715717   12816 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:26:42.716959   12816 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0802 17:26:42.719420   12816 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 17:26:42.719654   12816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:26:42.750993   12816 out.go:97] Using the kvm2 driver based on user configuration
	I0802 17:26:42.751037   12816 start.go:297] selected driver: kvm2
	I0802 17:26:42.751042   12816 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:26:42.751371   12816 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:42.751478   12816 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:26:42.765813   12816 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:26:42.765856   12816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:26:42.766309   12816 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0802 17:26:42.766446   12816 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 17:26:42.766493   12816 cni.go:84] Creating CNI manager for ""
	I0802 17:26:42.766505   12816 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:26:42.766512   12816 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:26:42.766560   12816 start.go:340] cluster config:
	{Name:download-only-380260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-380260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:26:42.766647   12816 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:26:42.768321   12816 out.go:97] Starting "download-only-380260" primary control-plane node in "download-only-380260" cluster
	I0802 17:26:42.768343   12816 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:26:43.291275   12816 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0802 17:26:43.291313   12816 cache.go:56] Caching tarball of preloaded images
	I0802 17:26:43.291483   12816 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0802 17:26:43.293323   12816 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0802 17:26:43.293336   12816 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0802 17:26:43.392346   12816 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-380260 host does not exist
	  To start a cluster, run: "minikube start -p download-only-380260"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.85s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-380260
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/json-events (48.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-399295 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-399295 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.589465341s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (48.59s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-399295
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-399295: exit status 85 (53.628987ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-039015 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-039015           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| delete  | -p download-only-039015           | download-only-039015 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC | 02 Aug 24 17:26 UTC |
	| start   | -o=json --download-only           | download-only-380260 | jenkins | v1.33.1 | 02 Aug 24 17:26 UTC |                     |
	|         | -p download-only-380260           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| delete  | -p download-only-380260           | download-only-380260 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC | 02 Aug 24 17:27 UTC |
	| start   | -o=json --download-only           | download-only-399295 | jenkins | v1.33.1 | 02 Aug 24 17:27 UTC |                     |
	|         | -p download-only-399295           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/02 17:27:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0802 17:27:01.148812   13052 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:27:01.149200   13052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:01.149215   13052 out.go:304] Setting ErrFile to fd 2...
	I0802 17:27:01.149222   13052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:27:01.149656   13052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:27:01.150216   13052 out.go:298] Setting JSON to true
	I0802 17:27:01.150997   13052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":565,"bootTime":1722619056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:27:01.151048   13052 start.go:139] virtualization: kvm guest
	I0802 17:27:01.153544   13052 out.go:97] [download-only-399295] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:27:01.153684   13052 notify.go:220] Checking for updates...
	I0802 17:27:01.155148   13052 out.go:169] MINIKUBE_LOCATION=19355
	I0802 17:27:01.156550   13052 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:27:01.157928   13052 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:27:01.159363   13052 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:27:01.160730   13052 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0802 17:27:01.163015   13052 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0802 17:27:01.163247   13052 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:27:01.195350   13052 out.go:97] Using the kvm2 driver based on user configuration
	I0802 17:27:01.195379   13052 start.go:297] selected driver: kvm2
	I0802 17:27:01.195385   13052 start.go:901] validating driver "kvm2" against <nil>
	I0802 17:27:01.195697   13052 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:01.195778   13052 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-5397/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0802 17:27:01.209838   13052 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0802 17:27:01.209878   13052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0802 17:27:01.210290   13052 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0802 17:27:01.210422   13052 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0802 17:27:01.210473   13052 cni.go:84] Creating CNI manager for ""
	I0802 17:27:01.210484   13052 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0802 17:27:01.210498   13052 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0802 17:27:01.210546   13052 start.go:340] cluster config:
	{Name:download-only-399295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-399295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:27:01.210652   13052 iso.go:125] acquiring lock: {Name:mkf5b6fddf709f8fd31f61df8eb110af3677ce98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0802 17:27:01.212160   13052 out.go:97] Starting "download-only-399295" primary control-plane node in "download-only-399295" cluster
	I0802 17:27:01.212179   13052 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 17:27:01.721683   13052 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0802 17:27:01.721758   13052 cache.go:56] Caching tarball of preloaded images
	I0802 17:27:01.721936   13052 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 17:27:01.723711   13052 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0802 17:27:01.723730   13052 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0802 17:27:01.824159   13052 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0802 17:27:17.114239   13052 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0802 17:27:17.114339   13052 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-5397/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0802 17:27:17.859353   13052 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0802 17:27:17.859680   13052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/download-only-399295/config.json ...
	I0802 17:27:17.859707   13052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/download-only-399295/config.json: {Name:mkffb60fded117b37204271b31e534d079921c4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0802 17:27:17.859848   13052 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0802 17:27:17.859982   13052 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19355-5397/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-399295 host does not exist
	  To start a cluster, run: "minikube start -p download-only-399295"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.05s)
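The LogsDuration output above captures minikube's preload flow for v1.31.0-rc.0 on crio: resolve the remote tarball, download it with an md5 checksum appended to the URL, verify the file, then cache it and fetch the matching kubectl. Purely as a sketch (not part of the test), the same fetch-and-verify step can be reproduced by hand with the URL and checksum printed in the log; the output filename below is arbitrary:

    # Download the preload tarball recorded in the log and check it against the
    # md5 value that minikube embeds in the download URL (?checksum=md5:...).
    url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4"
    out="preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4"
    curl -fL -o "$out" "$url"
    echo "89b2d75682ccec9e5b50b57ad7b65741  $out" | md5sum -c -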

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-399295
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-711292 --alsologtostderr --binary-mirror http://127.0.0.1:42613 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-711292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-711292
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (116.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-872961 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-872961 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m55.763882387s)
helpers_test.go:175: Cleaning up "offline-crio-872961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-872961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-872961: (1.051792339s)
--- PASS: TestOffline (116.82s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-892214
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-892214: exit status 85 (44.417568ms)

                                                
                                                
-- stdout --
	* Profile "addons-892214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-892214
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-892214: exit status 85 (44.670463ms)

                                                
                                                
-- stdout --
	* Profile "addons-892214" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-892214"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/Setup (142.97s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-892214 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-892214 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m22.969024819s)
--- PASS: TestAddons/Setup (142.97s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-892214 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-892214 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/parallel/Registry (16.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.565523ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-cs8q7" [7d2c31bd-4360-46bd-82c0-b2258ba69944] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005380733s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ntww4" [59de3da3-a31c-480b-8715-6dcecc3c01e6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004423831s
addons_test.go:342: (dbg) Run:  kubectl --context addons-892214 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-892214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-892214 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.779430382s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 ip
2024/08/02 17:31:00 [DEBUG] GET http://192.168.39.4:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.52s)
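For reference, the registry check above reduces to two probes that can be rerun by hand: the in-cluster wget --spider against the registry Service (exactly the command the test runs), and an HTTP request from the host to the registry proxy on the node IP the test resolved (192.168.39.4). The /v2/ path is the registry's standard API ping endpoint and is an assumption here, not something the test itself hits:

    # In-cluster probe via a throwaway busybox pod, as in the test:
    kubectl --context addons-892214 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # Host-side probe against the node IP reported by 'minikube ip':
    curl -i http://192.168.39.4:5000/v2/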

                                                
                                    
TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m9znl" [f2c3ee30-ac50-40ae-b9b6-356346089e39] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004317279s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-892214
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-892214: (5.770258526s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.35s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.651656ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-t67mn" [a61d96f6-f02c-4320-a0ef-8562603e4751] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004727384s
addons_test.go:475: (dbg) Run:  kubectl --context addons-892214 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-892214 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.767651692s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.35s)

                                                
                                    
TestAddons/parallel/CSI (68.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.887039ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-892214 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-892214 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a4ad4e94-4215-4074-8317-ee00b62ba804] Pending
helpers_test.go:344: "task-pv-pod" [a4ad4e94-4215-4074-8317-ee00b62ba804] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a4ad4e94-4215-4074-8317-ee00b62ba804] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.007178555s
addons_test.go:590: (dbg) Run:  kubectl --context addons-892214 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-892214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-892214 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-892214 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-892214 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-892214 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-892214 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [15c7e7d8-59aa-4f72-884b-22ac067b07e5] Pending
helpers_test.go:344: "task-pv-pod-restore" [15c7e7d8-59aa-4f72-884b-22ac067b07e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [15c7e7d8-59aa-4f72-884b-22ac067b07e5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004508415s
addons_test.go:632: (dbg) Run:  kubectl --context addons-892214 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-892214 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-892214 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.609504925s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.94s)

                                                
                                    
TestAddons/parallel/Headlamp (18.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-892214 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-z78fc" [a23d1b12-0c0c-49ae-8bc9-c265ff056157] Pending
helpers_test.go:344: "headlamp-7867546754-z78fc" [a23d1b12-0c0c-49ae-8bc9-c265ff056157] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-z78fc" [a23d1b12-0c0c-49ae-8bc9-c265ff056157] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005218045s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 addons disable headlamp --alsologtostderr -v=1: (5.635679216s)
--- PASS: TestAddons/parallel/Headlamp (18.57s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-86hvm" [4dacb25b-4ca9-4d19-9368-bd71c8ee9e1c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004570965s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-892214
--- PASS: TestAddons/parallel/CloudSpanner (6.52s)

                                                
                                    
TestAddons/parallel/LocalPath (12.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-892214 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-892214 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7c474715-7a9e-4220-a076-dd847fc36dd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7c474715-7a9e-4220-a076-dd847fc36dd0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7c474715-7a9e-4220-a076-dd847fc36dd0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005302373s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-892214 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 ssh "cat /opt/local-path-provisioner/pvc-a1b79ae1-93e6-47b1-8e06-9a59fcccfc8d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-892214 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-892214 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.14s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.5s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7hdnl" [6af5e808-ef75-4f5b-8567-c08fc5f82515] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004587511s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-892214
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

                                                
                                    
TestAddons/parallel/Yakd (10.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-2z7vn" [ec8af14f-243c-46f6-8d85-245acfc9bbbd] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00434109s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-892214 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-892214 addons disable yakd --alsologtostderr -v=1: (5.694808344s)
--- PASS: TestAddons/parallel/Yakd (10.70s)

                                                
                                    
TestCertOptions (48.23s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-643429 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-643429 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.009497625s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-643429 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-643429 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-643429 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-643429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-643429
--- PASS: TestCertOptions (48.23s)
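TestCertOptions passes extra --apiserver-ips/--apiserver-names and a custom --apiserver-port, then inspects the generated certificate over SSH. A quick manual spot-check on the same profile might look like the following sketch (the grep patterns are assumptions about the openssl text output, not part of the test):

    # Confirm the extra SANs (192.168.15.15, www.google.com) made it into the apiserver cert:
    out/minikube-linux-amd64 -p cert-options-643429 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"

    # Confirm the kubeconfig points at the custom API server port 8555:
    kubectl --context cert-options-643429 config view | grep server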

                                                
                                    
TestCertExpiration (266.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-139745 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-139745 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.377463249s)
E0802 18:37:26.975212   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-139745 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
E0802 18:40:14.261623   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-139745 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.538618009s)
helpers_test.go:175: Cleaning up "cert-expiration-139745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-139745
--- PASS: TestCertExpiration (266.74s)

                                                
                                    
TestForceSystemdFlag (52.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-234725 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-234725 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.830785431s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-234725 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-234725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-234725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-234725: (1.532533296s)
--- PASS: TestForceSystemdFlag (52.56s)

                                                
                                    
TestForceSystemdEnv (42.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-919916 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-919916 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.088087145s)
helpers_test.go:175: Cleaning up "force-systemd-env-919916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-919916
--- PASS: TestForceSystemdEnv (42.06s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.71s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.71s)

                                                
                                    
TestErrorSpam/setup (40.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-583957 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-583957 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-583957 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-583957 --driver=kvm2  --container-runtime=crio: (40.373987717s)
--- PASS: TestErrorSpam/setup (40.37s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.7s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 status
--- PASS: TestErrorSpam/status (0.70s)

                                                
                                    
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
TestErrorSpam/stop (4.35s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop: (1.560759169s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop: (1.222793652s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop
E0802 17:40:14.261086   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.266759   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.277083   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.297426   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.337719   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.418092   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.578580   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:14.899185   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-583957 --log_dir /tmp/nospam-583957 stop: (1.564541408s)
--- PASS: TestErrorSpam/stop (4.35s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19355-5397/.minikube/files/etc/test/nested/copy/12547/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (56.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0802 17:40:15.539872   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:16.820305   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:19.381029   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:24.501565   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:34.742424   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:40:55.223267   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-096349 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.593731468s)
--- PASS: TestFunctional/serial/StartWithProxy (56.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.57s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --alsologtostderr -v=8
E0802 17:41:36.183470   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-096349 --alsologtostderr -v=8: (37.566997051s)
functional_test.go:659: soft start took 37.567607242s for "functional-096349" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.57s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-096349 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:3.1: (1.312284608s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:3.3: (1.330118477s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 cache add registry.k8s.io/pause:latest: (1.171417848s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-096349 /tmp/TestFunctionalserialCacheCmdcacheadd_local2255884647/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache add minikube-local-cache-test:functional-096349
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 cache add minikube-local-cache-test:functional-096349: (1.688934156s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache delete minikube-local-cache-test:functional-096349
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-096349
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.891108ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
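The cache_reload steps above are easier to read as a single sequence; this is just the log's own commands laid out in order, with the expected outcome of each noted:

    # Remove the cached pause image from the node, confirm it is gone,
    # then ask minikube to reload images from its local cache.
    out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image no longer present
    out/minikube-linux-amd64 -p functional-096349 cache reload
    out/minikube-linux-amd64 -p functional-096349 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds after the reload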

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 kubectl -- --context functional-096349 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-096349 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-096349 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.257597394s)
functional_test.go:757: restart took 38.257719949s for "functional-096349" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-096349 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 logs: (1.440415106s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 logs --file /tmp/TestFunctionalserialLogsFileCmd3660426091/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 logs --file /tmp/TestFunctionalserialLogsFileCmd3660426091/001/logs.txt: (1.399845417s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-096349 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-096349
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-096349: exit status 115 (264.29714ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.234:31390 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-096349 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)
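
The SVC_UNREACHABLE exit above is the expected outcome: the applied service has no running pod behind it. The actual testdata/invalidsvc.yaml is not reproduced in this report; a hypothetical manifest with the same effect (a selector that matches no pods) would look like:

    kubectl --context functional-096349 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod      # nothing carries this label, so the service never gets endpoints
      ports:
      - port: 80
    EOF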

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 config get cpus: exit status 14 (55.267979ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 config get cpus: exit status 14 (48.04352ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
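
Exit status 14 is how `config get` reports a key that is not set, which makes the behaviour easy to script; a small sketch built from the same commands the test runs:

    out/minikube-linux-amd64 -p functional-096349 config set cpus 2
    out/minikube-linux-amd64 -p functional-096349 config get cpus        # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-096349 config unset cpus
    out/minikube-linux-amd64 -p functional-096349 config get cpus \
      || echo "cpus is not set (exit $?)"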

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.46s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096349 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096349 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22438: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.46s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096349 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.758889ms)

                                                
                                                
-- stdout --
	* [functional-096349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:43:07.057638   22319 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:43:07.057954   22319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:07.057970   22319 out.go:304] Setting ErrFile to fd 2...
	I0802 17:43:07.057977   22319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:07.058265   22319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:43:07.059088   22319 out.go:298] Setting JSON to false
	I0802 17:43:07.060448   22319 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1531,"bootTime":1722619056,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:43:07.060530   22319 start.go:139] virtualization: kvm guest
	I0802 17:43:07.063059   22319 out.go:177] * [functional-096349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 17:43:07.064664   22319 notify.go:220] Checking for updates...
	I0802 17:43:07.064690   22319 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:43:07.066107   22319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:43:07.067486   22319 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:43:07.068831   22319 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:07.070173   22319 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:43:07.071518   22319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:43:07.073350   22319 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:43:07.073963   22319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:07.074043   22319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:07.089144   22319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0802 17:43:07.089655   22319 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:07.090261   22319 main.go:141] libmachine: Using API Version  1
	I0802 17:43:07.090289   22319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:07.090708   22319 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:07.090872   22319 main.go:141] libmachine: (functional-096349) Calling .DriverName
	I0802 17:43:07.091232   22319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:43:07.091628   22319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:07.091673   22319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:07.106071   22319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0802 17:43:07.106448   22319 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:07.106937   22319 main.go:141] libmachine: Using API Version  1
	I0802 17:43:07.106963   22319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:07.107321   22319 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:07.107547   22319 main.go:141] libmachine: (functional-096349) Calling .DriverName
	I0802 17:43:07.147376   22319 out.go:177] * Using the kvm2 driver based on existing profile
	I0802 17:43:07.148652   22319 start.go:297] selected driver: kvm2
	I0802 17:43:07.148664   22319 start.go:901] validating driver "kvm2" against &{Name:functional-096349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-096349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:07.148794   22319 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:43:07.150752   22319 out.go:177] 
	W0802 17:43:07.151988   22319 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0802 17:43:07.153241   22319 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
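
The non-zero exit comes from the RSRC_INSUFFICIENT_REQ_MEMORY check: the requested 250MB is below the 1800MB minimum reported in the stderr above. A dry run with an allocation at or above that minimum should pass validation; a sketch (2048mb is just an example value):

    out/minikube-linux-amd64 start -p functional-096349 --dry-run --memory 2048mb \
      --alsologtostderr --driver=kvm2 --container-runtime=crio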

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096349 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096349 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.129323ms)

                                                
                                                
-- stdout --
	* [functional-096349] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 17:43:07.326991   22374 out.go:291] Setting OutFile to fd 1 ...
	I0802 17:43:07.327150   22374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:07.327163   22374 out.go:304] Setting ErrFile to fd 2...
	I0802 17:43:07.327170   22374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 17:43:07.327588   22374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 17:43:07.328295   22374 out.go:298] Setting JSON to false
	I0802 17:43:07.329564   22374 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1531,"bootTime":1722619056,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 17:43:07.329654   22374 start.go:139] virtualization: kvm guest
	I0802 17:43:07.332063   22374 out.go:177] * [functional-096349] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0802 17:43:07.333482   22374 notify.go:220] Checking for updates...
	I0802 17:43:07.333506   22374 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 17:43:07.334862   22374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 17:43:07.336301   22374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 17:43:07.337662   22374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 17:43:07.339076   22374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 17:43:07.340486   22374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 17:43:07.342152   22374 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 17:43:07.342543   22374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:07.342590   22374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:07.357279   22374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41713
	I0802 17:43:07.357737   22374 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:07.358274   22374 main.go:141] libmachine: Using API Version  1
	I0802 17:43:07.358292   22374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:07.358617   22374 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:07.358837   22374 main.go:141] libmachine: (functional-096349) Calling .DriverName
	I0802 17:43:07.359073   22374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 17:43:07.359384   22374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 17:43:07.359434   22374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 17:43:07.374445   22374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0802 17:43:07.374875   22374 main.go:141] libmachine: () Calling .GetVersion
	I0802 17:43:07.375488   22374 main.go:141] libmachine: Using API Version  1
	I0802 17:43:07.375528   22374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 17:43:07.375881   22374 main.go:141] libmachine: () Calling .GetMachineName
	I0802 17:43:07.376076   22374 main.go:141] libmachine: (functional-096349) Calling .DriverName
	I0802 17:43:07.413935   22374 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0802 17:43:07.415230   22374 start.go:297] selected driver: kvm2
	I0802 17:43:07.415259   22374 start.go:901] validating driver "kvm2" against &{Name:functional-096349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-096349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0802 17:43:07.415381   22374 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 17:43:07.417482   22374 out.go:177] 
	W0802 17:43:07.418661   22374 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0802 17:43:07.419994   22374 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
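
The French lines above are the localized form of the DryRun output ("Utilisation du pilote kvm2 basé sur le profil existant" = "Using the kvm2 driver based on existing profile"; the RSRC_INSUFFICIENT_REQ_MEMORY message is the same memory-validation error). minikube selects its output language from the locale environment, so a hedged way to reproduce this outside the test harness is:

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-096349 --dry-run \
      --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio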

                                                
                                    
TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-096349 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-096349 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-hx2zb" [b765790a-e34b-48f8-959b-9fd369bc3d63] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-hx2zb" [b765790a-e34b-48f8-959b-9fd369bc3d63] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003670292s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.50.234:32196
functional_test.go:1671: http://192.168.50.234:32196: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-hx2zb

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.234:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.234:32196
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.52s)
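
The URL returned by `service ... --url` is an ordinary NodePort endpoint, so the request the test makes can be repeated directly; a minimal sketch:

    URL=$(out/minikube-linux-amd64 -p functional-096349 service hello-node-connect --url)
    curl -s "$URL"        # prints the echoserver response shown above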

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [85284712-3f3f-4bb6-8388-51d68b139a09] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004084831s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-096349 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-096349 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-096349 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-096349 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2d2e19ef-f12c-41e0-9d7d-f6f2790450e5] Pending
helpers_test.go:344: "sp-pod" [2d2e19ef-f12c-41e0-9d7d-f6f2790450e5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2d2e19ef-f12c-41e0-9d7d-f6f2790450e5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.005050207s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-096349 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-096349 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-096349 delete -f testdata/storage-provisioner/pod.yaml: (1.494606234s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-096349 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f294caaa-962b-4281-afea-d9488b55e298] Pending
helpers_test.go:344: "sp-pod" [f294caaa-962b-4281-afea-d9488b55e298] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f294caaa-962b-4281-afea-d9488b55e298] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.005219186s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-096349 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.35s)
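
The manifests under testdata/storage-provisioner are not reproduced in this report. A hypothetical claim of the same shape, relying on the default StorageClass and matching the `get pvc myclaim` call above, would be:

    kubectl --context functional-096349 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF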

                                                
                                    
TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh -n functional-096349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cp functional-096349:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd313238401/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh -n functional-096349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh -n functional-096349 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
TestFunctional/parallel/MySQL (27.3s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-096349 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-28hdr" [487d25ca-6af2-454c-8dba-66c645b19478] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-28hdr" [487d25ca-6af2-454c-8dba-66c645b19478] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.00368697s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- mysql -ppassword -e "show databases;": exit status 1 (231.473825ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- mysql -ppassword -e "show databases;": exit status 1 (161.797999ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.30s)
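
The two ERROR 2002 failures simply mean mysqld was still starting inside the pod; the test retries until the socket comes up. The same wait can be scripted; a sketch (the pod name is taken from this particular run and will differ elsewhere):

    until kubectl --context functional-096349 exec mysql-64454c8b5c-28hdr -- \
        mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
      sleep 2
    done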

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12547/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /etc/test/nested/copy/12547/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12547.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /etc/ssl/certs/12547.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12547.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /usr/share/ca-certificates/12547.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/125472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /etc/ssl/certs/125472.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/125472.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /usr/share/ca-certificates/125472.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
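
The `<8 hex chars>.0` paths checked above follow the OpenSSL subject-hash naming convention used in /etc/ssl/certs. The log does not say which hash belongs to which certificate, but the hash of a given PEM file can be computed inside the guest, assuming openssl is present in the VM image:

    out/minikube-linux-amd64 -p functional-096349 ssh \
      "sudo openssl x509 -noout -hash -in /usr/share/ca-certificates/12547.pem"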

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-096349 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "sudo systemctl is-active docker": exit status 1 (201.610581ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "sudo systemctl is-active containerd": exit status 1 (219.211901ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
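
Exit status 3 here is `systemctl is-active` reporting "inactive" for the runtimes that are not in use on this cri-o cluster. The complementary check for the configured runtime should report active; a sketch:

    out/minikube-linux-amd64 -p functional-096349 ssh "sudo systemctl is-active crio"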

                                                
                                    
TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.92s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096349 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-096349
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-096349
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096349 image ls --format short --alsologtostderr:
I0802 17:43:20.302091   23040 out.go:291] Setting OutFile to fd 1 ...
I0802 17:43:20.302225   23040 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:20.302237   23040 out.go:304] Setting ErrFile to fd 2...
I0802 17:43:20.302244   23040 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:20.302540   23040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
I0802 17:43:20.303337   23040 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:20.303495   23040 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:20.304048   23040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:20.304099   23040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:20.319302   23040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
I0802 17:43:20.319711   23040 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:20.320341   23040 main.go:141] libmachine: Using API Version  1
I0802 17:43:20.320387   23040 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:20.320830   23040 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:20.321020   23040 main.go:141] libmachine: (functional-096349) Calling .GetState
I0802 17:43:20.322993   23040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:20.323059   23040 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:20.337833   23040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
I0802 17:43:20.338192   23040 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:20.338630   23040 main.go:141] libmachine: Using API Version  1
I0802 17:43:20.338651   23040 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:20.338973   23040 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:20.339132   23040 main.go:141] libmachine: (functional-096349) Calling .DriverName
I0802 17:43:20.339348   23040 ssh_runner.go:195] Run: systemctl --version
I0802 17:43:20.339389   23040 main.go:141] libmachine: (functional-096349) Calling .GetSSHHostname
I0802 17:43:20.341724   23040 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:20.342060   23040 main.go:141] libmachine: (functional-096349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:d9:9a", ip: ""} in network mk-functional-096349: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:f5:d9:9a Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:functional-096349 Clientid:01:52:54:00:f5:d9:9a}
I0802 17:43:20.342085   23040 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined IP address 192.168.50.234 and MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:20.342160   23040 main.go:141] libmachine: (functional-096349) Calling .GetSSHPort
I0802 17:43:20.342319   23040 main.go:141] libmachine: (functional-096349) Calling .GetSSHKeyPath
I0802 17:43:20.342435   23040 main.go:141] libmachine: (functional-096349) Calling .GetSSHUsername
I0802 17:43:20.342568   23040 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/functional-096349/id_rsa Username:docker}
I0802 17:43:20.449401   23040 ssh_runner.go:195] Run: sudo crictl images --output json
I0802 17:43:20.504068   23040 main.go:141] libmachine: Making call to close driver server
I0802 17:43:20.504086   23040 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:20.504375   23040 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
I0802 17:43:20.504440   23040 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:20.504452   23040 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:20.504466   23040 main.go:141] libmachine: Making call to close driver server
I0802 17:43:20.504478   23040 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:20.504715   23040 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
I0802 17:43:20.504723   23040 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:20.504761   23040 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
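
As the stderr trace shows, `image ls` is backed by `sudo crictl images --output json` inside the guest, so the same listing can be taken directly over ssh, e.g.:

    out/minikube-linux-amd64 -p functional-096349 ssh "sudo crictl images"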

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096349 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kicbase/echo-server           | functional-096349  | 9056ab77afb8e | 4.94MB |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-096349  | c656514733a48 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096349 image ls --format table --alsologtostderr:
I0802 17:43:21.402625   23174 out.go:291] Setting OutFile to fd 1 ...
I0802 17:43:21.402773   23174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.402785   23174 out.go:304] Setting ErrFile to fd 2...
I0802 17:43:21.402791   23174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.402987   23174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
I0802 17:43:21.403636   23174 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.403768   23174 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.404158   23174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.404215   23174 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.419401   23174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38443
I0802 17:43:21.419810   23174 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.420406   23174 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.420437   23174 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.420804   23174 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.421002   23174 main.go:141] libmachine: (functional-096349) Calling .GetState
I0802 17:43:21.422705   23174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.422741   23174 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.438033   23174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
I0802 17:43:21.438438   23174 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.438892   23174 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.438917   23174 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.439304   23174 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.439508   23174 main.go:141] libmachine: (functional-096349) Calling .DriverName
I0802 17:43:21.439747   23174 ssh_runner.go:195] Run: systemctl --version
I0802 17:43:21.439772   23174 main.go:141] libmachine: (functional-096349) Calling .GetSSHHostname
I0802 17:43:21.442503   23174 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.442906   23174 main.go:141] libmachine: (functional-096349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:d9:9a", ip: ""} in network mk-functional-096349: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:f5:d9:9a Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:functional-096349 Clientid:01:52:54:00:f5:d9:9a}
I0802 17:43:21.442940   23174 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined IP address 192.168.50.234 and MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.443009   23174 main.go:141] libmachine: (functional-096349) Calling .GetSSHPort
I0802 17:43:21.443201   23174 main.go:141] libmachine: (functional-096349) Calling .GetSSHKeyPath
I0802 17:43:21.443457   23174 main.go:141] libmachine: (functional-096349) Calling .GetSSHUsername
I0802 17:43:21.443620   23174 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/functional-096349/id_rsa Username:docker}
I0802 17:43:21.540383   23174 ssh_runner.go:195] Run: sudo crictl images --output json
I0802 17:43:21.605005   23174 main.go:141] libmachine: Making call to close driver server
I0802 17:43:21.605082   23174 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:21.605339   23174 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:21.605387   23174 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:21.605411   23174 main.go:141] libmachine: Making call to close driver server
I0802 17:43:21.605419   23174 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:21.605689   23174 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:21.605704   23174 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:21.605833   23174 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
2024/08/02 17:43:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096349 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-096349"],"size":"4943877"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":
"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kub
e-scheduler:v1.30.3"],"size":"63051080"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7
528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c656514733a48182bbf0450af8cbb8a536934ad761d4ab4fda5ff6289f9c12aa","repoDigests":["localhost/minikube-local-cache-test@sha256:c103f5f27ffcb94833304bb144916f5c1234d3d94cf44f8a1984b4e3bcef8663"],"repoTags":["localhost/minikube-local-cache-test:functional-096349"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d73
3a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["regist
ry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096349 image ls --format json --alsologtostderr:
I0802 17:43:21.176290   23129 out.go:291] Setting OutFile to fd 1 ...
I0802 17:43:21.176442   23129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.176457   23129 out.go:304] Setting ErrFile to fd 2...
I0802 17:43:21.176464   23129 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.176659   23129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
I0802 17:43:21.177233   23129 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.177354   23129 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.177745   23129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.177799   23129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.194134   23129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
I0802 17:43:21.194671   23129 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.195304   23129 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.195335   23129 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.195756   23129 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.195944   23129 main.go:141] libmachine: (functional-096349) Calling .GetState
I0802 17:43:21.198017   23129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.198063   23129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.213337   23129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45455
I0802 17:43:21.213748   23129 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.214297   23129 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.214322   23129 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.214652   23129 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.214860   23129 main.go:141] libmachine: (functional-096349) Calling .DriverName
I0802 17:43:21.215028   23129 ssh_runner.go:195] Run: systemctl --version
I0802 17:43:21.215066   23129 main.go:141] libmachine: (functional-096349) Calling .GetSSHHostname
I0802 17:43:21.218032   23129 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.218439   23129 main.go:141] libmachine: (functional-096349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:d9:9a", ip: ""} in network mk-functional-096349: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:f5:d9:9a Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:functional-096349 Clientid:01:52:54:00:f5:d9:9a}
I0802 17:43:21.218468   23129 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined IP address 192.168.50.234 and MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.218611   23129 main.go:141] libmachine: (functional-096349) Calling .GetSSHPort
I0802 17:43:21.218747   23129 main.go:141] libmachine: (functional-096349) Calling .GetSSHKeyPath
I0802 17:43:21.218892   23129 main.go:141] libmachine: (functional-096349) Calling .GetSSHUsername
I0802 17:43:21.219034   23129 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/functional-096349/id_rsa Username:docker}
I0802 17:43:21.309835   23129 ssh_runner.go:195] Run: sudo crictl images --output json
I0802 17:43:21.355562   23129 main.go:141] libmachine: Making call to close driver server
I0802 17:43:21.355574   23129 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:21.355873   23129 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
I0802 17:43:21.355892   23129 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:21.355902   23129 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:21.355913   23129 main.go:141] libmachine: Making call to close driver server
I0802 17:43:21.355925   23129 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:21.356135   23129 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:21.356147   23129 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096349 image ls --format yaml --alsologtostderr:
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c656514733a48182bbf0450af8cbb8a536934ad761d4ab4fda5ff6289f9c12aa
repoDigests:
- localhost/minikube-local-cache-test@sha256:c103f5f27ffcb94833304bb144916f5c1234d3d94cf44f8a1984b4e3bcef8663
repoTags:
- localhost/minikube-local-cache-test:functional-096349
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-096349
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096349 image ls --format yaml --alsologtostderr:
I0802 17:43:20.553845   23075 out.go:291] Setting OutFile to fd 1 ...
I0802 17:43:20.553942   23075 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:20.553950   23075 out.go:304] Setting ErrFile to fd 2...
I0802 17:43:20.553954   23075 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:20.554127   23075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
I0802 17:43:20.554653   23075 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:20.554750   23075 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:20.555219   23075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:20.555257   23075 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:20.569910   23075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
I0802 17:43:20.570333   23075 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:20.570867   23075 main.go:141] libmachine: Using API Version  1
I0802 17:43:20.570892   23075 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:20.571192   23075 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:20.571374   23075 main.go:141] libmachine: (functional-096349) Calling .GetState
I0802 17:43:20.573495   23075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:20.573551   23075 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:20.588816   23075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
I0802 17:43:20.589315   23075 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:20.589855   23075 main.go:141] libmachine: Using API Version  1
I0802 17:43:20.589896   23075 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:20.590216   23075 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:20.590403   23075 main.go:141] libmachine: (functional-096349) Calling .DriverName
I0802 17:43:20.590618   23075 ssh_runner.go:195] Run: systemctl --version
I0802 17:43:20.590660   23075 main.go:141] libmachine: (functional-096349) Calling .GetSSHHostname
I0802 17:43:20.593201   23075 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:20.593581   23075 main.go:141] libmachine: (functional-096349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:d9:9a", ip: ""} in network mk-functional-096349: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:f5:d9:9a Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:functional-096349 Clientid:01:52:54:00:f5:d9:9a}
I0802 17:43:20.593623   23075 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined IP address 192.168.50.234 and MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:20.593746   23075 main.go:141] libmachine: (functional-096349) Calling .GetSSHPort
I0802 17:43:20.593889   23075 main.go:141] libmachine: (functional-096349) Calling .GetSSHKeyPath
I0802 17:43:20.594017   23075 main.go:141] libmachine: (functional-096349) Calling .GetSSHUsername
I0802 17:43:20.594178   23075 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/functional-096349/id_rsa Username:docker}
I0802 17:43:20.711040   23075 ssh_runner.go:195] Run: sudo crictl images --output json
I0802 17:43:20.801982   23075 main.go:141] libmachine: Making call to close driver server
I0802 17:43:20.802025   23075 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:20.802409   23075 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
I0802 17:43:20.802409   23075 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:20.802445   23075 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:20.802458   23075 main.go:141] libmachine: Making call to close driver server
I0802 17:43:20.802470   23075 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:20.802721   23075 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:20.802739   23075 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:20.802757   23075 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)
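For reference, the three ImageList tests above exercise the same command with different output formats. A minimal sketch (profile name taken from this run; the image set will differ on other clusters):

out/minikube-linux-amd64 -p functional-096349 image ls --format table
out/minikube-linux-amd64 -p functional-096349 image ls --format json
out/minikube-linux-amd64 -p functional-096349 image ls --format yaml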

TestFunctional/parallel/ImageCommands/ImageBuild (5.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh pgrep buildkitd: exit status 1 (231.266455ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image build -t localhost/my-image:functional-096349 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 image build -t localhost/my-image:functional-096349 testdata/build --alsologtostderr: (4.918807509s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096349 image build -t localhost/my-image:functional-096349 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 56d2a62f98b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-096349
--> 1329ce187f3
Successfully tagged localhost/my-image:functional-096349
1329ce187f335c522159dc534b3837c131cd07fb35e09033edd6d1b522c25d3f
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096349 image build -t localhost/my-image:functional-096349 testdata/build --alsologtostderr:
I0802 17:43:21.200546   23140 out.go:291] Setting OutFile to fd 1 ...
I0802 17:43:21.200699   23140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.200711   23140 out.go:304] Setting ErrFile to fd 2...
I0802 17:43:21.200717   23140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0802 17:43:21.200972   23140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
I0802 17:43:21.201732   23140 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.202281   23140 config.go:182] Loaded profile config "functional-096349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0802 17:43:21.202659   23140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.202705   23140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.218338   23140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
I0802 17:43:21.218965   23140 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.219593   23140 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.219617   23140 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.219922   23140 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.220127   23140 main.go:141] libmachine: (functional-096349) Calling .GetState
I0802 17:43:21.221572   23140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0802 17:43:21.221606   23140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0802 17:43:21.236011   23140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43587
I0802 17:43:21.236495   23140 main.go:141] libmachine: () Calling .GetVersion
I0802 17:43:21.237017   23140 main.go:141] libmachine: Using API Version  1
I0802 17:43:21.237041   23140 main.go:141] libmachine: () Calling .SetConfigRaw
I0802 17:43:21.237338   23140 main.go:141] libmachine: () Calling .GetMachineName
I0802 17:43:21.237500   23140 main.go:141] libmachine: (functional-096349) Calling .DriverName
I0802 17:43:21.237698   23140 ssh_runner.go:195] Run: systemctl --version
I0802 17:43:21.237726   23140 main.go:141] libmachine: (functional-096349) Calling .GetSSHHostname
I0802 17:43:21.240342   23140 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.240718   23140 main.go:141] libmachine: (functional-096349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:d9:9a", ip: ""} in network mk-functional-096349: {Iface:virbr1 ExpiryTime:2024-08-02 18:40:29 +0000 UTC Type:0 Mac:52:54:00:f5:d9:9a Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:functional-096349 Clientid:01:52:54:00:f5:d9:9a}
I0802 17:43:21.240758   23140 main.go:141] libmachine: (functional-096349) DBG | domain functional-096349 has defined IP address 192.168.50.234 and MAC address 52:54:00:f5:d9:9a in network mk-functional-096349
I0802 17:43:21.240942   23140 main.go:141] libmachine: (functional-096349) Calling .GetSSHPort
I0802 17:43:21.241089   23140 main.go:141] libmachine: (functional-096349) Calling .GetSSHKeyPath
I0802 17:43:21.241240   23140 main.go:141] libmachine: (functional-096349) Calling .GetSSHUsername
I0802 17:43:21.241390   23140 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/functional-096349/id_rsa Username:docker}
I0802 17:43:21.336534   23140 build_images.go:161] Building image from path: /tmp/build.2695355234.tar
I0802 17:43:21.336615   23140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0802 17:43:21.361667   23140 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2695355234.tar
I0802 17:43:21.366242   23140 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2695355234.tar: stat -c "%s %y" /var/lib/minikube/build/build.2695355234.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2695355234.tar': No such file or directory
I0802 17:43:21.366278   23140 ssh_runner.go:362] scp /tmp/build.2695355234.tar --> /var/lib/minikube/build/build.2695355234.tar (3072 bytes)
I0802 17:43:21.403743   23140 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2695355234
I0802 17:43:21.426596   23140 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2695355234 -xf /var/lib/minikube/build/build.2695355234.tar
I0802 17:43:21.465755   23140 crio.go:315] Building image: /var/lib/minikube/build/build.2695355234
I0802 17:43:21.465826   23140 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-096349 /var/lib/minikube/build/build.2695355234 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0802 17:43:26.040081   23140 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-096349 /var/lib/minikube/build/build.2695355234 --cgroup-manager=cgroupfs: (4.574229641s)
I0802 17:43:26.040161   23140 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2695355234
I0802 17:43:26.052514   23140 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2695355234.tar
I0802 17:43:26.063508   23140 build_images.go:217] Built localhost/my-image:functional-096349 from /tmp/build.2695355234.tar
I0802 17:43:26.063542   23140 build_images.go:133] succeeded building to: functional-096349
I0802 17:43:26.063546   23140 build_images.go:134] failed building to: 
I0802 17:43:26.063572   23140 main.go:141] libmachine: Making call to close driver server
I0802 17:43:26.063584   23140 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:26.063950   23140 main.go:141] libmachine: (functional-096349) DBG | Closing plugin on server side
I0802 17:43:26.063955   23140 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:26.063973   23140 main.go:141] libmachine: Making call to close connection to plugin binary
I0802 17:43:26.063983   23140 main.go:141] libmachine: Making call to close driver server
I0802 17:43:26.063991   23140 main.go:141] libmachine: (functional-096349) Calling .Close
I0802 17:43:26.064188   23140 main.go:141] libmachine: Successfully made call to close driver server
I0802 17:43:26.064202   23140 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.36s)
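A minimal sketch of the build flow exercised above, using the same profile and tag. The build context contents are inferred from the STEP lines in the output and may not match the real testdata/build exactly:

# testdata/build is tarred, copied into the VM, and built there with podman
# (inferred Containerfile: FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /)
out/minikube-linux-amd64 -p functional-096349 image build -t localhost/my-image:functional-096349 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-096349 image ls   # confirm localhost/my-image:functional-096349 is listed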

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.746035816s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-096349
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-096349 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-096349 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ptwgt" [0b0409d1-e073-4e3b-aacb-dac5f47096c4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ptwgt" [0b0409d1-e073-4e3b-aacb-dac5f47096c4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.004414995s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image load --daemon docker.io/kicbase/echo-server:functional-096349 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-096349 image load --daemon docker.io/kicbase/echo-server:functional-096349 --alsologtostderr: (2.411130456s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.63s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image load --daemon docker.io/kicbase/echo-server:functional-096349 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-096349
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image load --daemon docker.io/kicbase/echo-server:functional-096349 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image save docker.io/kicbase/echo-server:functional-096349 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image rm docker.io/kicbase/echo-server:functional-096349 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-096349
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 image save --daemon docker.io/kicbase/echo-server:functional-096349 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-096349
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
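Taken together, the ImageCommands tests above amount to a load/save round trip for the echo-server image. A sketch using the same image and flags (the tar path here is illustrative rather than the workspace path from the log):

out/minikube-linux-amd64 -p functional-096349 image load --daemon docker.io/kicbase/echo-server:functional-096349
out/minikube-linux-amd64 -p functional-096349 image save docker.io/kicbase/echo-server:functional-096349 ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-096349 image rm docker.io/kicbase/echo-server:functional-096349
out/minikube-linux-amd64 -p functional-096349 image load ./echo-server-save.tar
out/minikube-linux-amd64 -p functional-096349 image save --daemon docker.io/kicbase/echo-server:functional-096349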

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "210.619431ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "41.887888ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "206.594724ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "41.127531ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)
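The ProfileCmd timings above compare the listing variants; a sketch of the same invocations (the --light form skips validating cluster status, which is consistent with its much faster timing here):

out/minikube-linux-amd64 profile list
out/minikube-linux-amd64 profile list -l
out/minikube-linux-amd64 profile list -o json
out/minikube-linux-amd64 profile list -o json --light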

TestFunctional/parallel/MountCmd/any-port (18.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdany-port2990488256/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722620577987183451" to /tmp/TestFunctionalparallelMountCmdany-port2990488256/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722620577987183451" to /tmp/TestFunctionalparallelMountCmdany-port2990488256/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722620577987183451" to /tmp/TestFunctionalparallelMountCmdany-port2990488256/001/test-1722620577987183451
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p"
E0802 17:42:58.104099   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.913454ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  2 17:42 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  2 17:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  2 17:42 test-1722620577987183451
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh cat /mount-9p/test-1722620577987183451
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-096349 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e337c75f-843e-46f6-b07e-9372f22e5871] Pending
helpers_test.go:344: "busybox-mount" [e337c75f-843e-46f6-b07e-9372f22e5871] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e337c75f-843e-46f6-b07e-9372f22e5871] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e337c75f-843e-46f6-b07e-9372f22e5871] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.012259157s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-096349 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdany-port2990488256/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (18.64s)
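A minimal sketch of the 9p mount flow checked above; the host path is illustrative, while the guest path, checks, and cleanup commands are the ones used by the tests (including the --kill=true form from VerifyCleanup further down):

out/minikube-linux-amd64 mount -p functional-096349 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-096349 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-096349 ssh "sudo umount -f /mount-9p"
out/minikube-linux-amd64 mount -p functional-096349 --kill=true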

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service list -o json
functional_test.go:1490: Took "471.69331ms" to run "out/minikube-linux-amd64 -p functional-096349 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.50.234:31776
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.50.234:31776
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
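The ServiceCmd tests above all resolve the hello-node deployment created in DeployApp; a sketch of the end-to-end flow with the same names and port (the printed URL is specific to this run):

kubectl --context functional-096349 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-096349 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-096349 service list -o json
out/minikube-linux-amd64 -p functional-096349 service hello-node --url                    # e.g. http://192.168.50.234:31776
out/minikube-linux-amd64 -p functional-096349 service --namespace=default --https --url hello-node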

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdspecific-port3283919560/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (186.945538ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdspecific-port3283919560/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "sudo umount -f /mount-9p": exit status 1 (183.65074ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-096349 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdspecific-port3283919560/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T" /mount1: exit status 1 (221.347843ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096349 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-096349 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096349 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2521433015/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)
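The mount checks above reduce to probing the guest over ssh with findmnt and treating a non-zero exit as "not mounted". A minimal standalone sketch of that probe, not part of the test suite; the binary path, profile name, and mount points are simply the ones that appear in the log above:

package main

import (
	"fmt"
	"os/exec"
)

// checkGuestMount mirrors the `ssh findmnt -T <path>` probe used above:
// findmnt exits non-zero when the path is not a mount point.
func checkGuestMount(profile, path string) bool {
	return exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "findmnt", "-T", path).Run() == nil
}

func main() {
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		fmt.Printf("%s mounted: %v\n", m, checkGuestMount("functional-096349", m))
	}
}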

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-096349
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-096349
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-096349
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (206.98s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0802 17:45:14.261602   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 17:45:41.944336   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-652395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m26.334184012s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.98s)

TestMultiControlPlane/serial/DeployApp (6.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-652395 -- rollout status deployment/busybox: (3.979828322s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-4gkm6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-lwm5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-wwdvm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-4gkm6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-lwm5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-wwdvm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-4gkm6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-lwm5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-wwdvm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.04s)

TestMultiControlPlane/serial/PingHostFromPods (1.19s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-4gkm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-4gkm6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-lwm5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-lwm5m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-wwdvm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-652395 -- exec busybox-fc5497c4f-wwdvm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

TestMultiControlPlane/serial/AddWorkerNode (55.41s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-652395 -v=7 --alsologtostderr
E0802 17:47:43.927298   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:43.932591   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:43.942873   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:43.963219   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:44.003646   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:44.084028   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:44.244444   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:44.565169   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:45.205519   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:46.486334   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:49.046856   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 17:47:54.167228   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-652395 -v=7 --alsologtostderr: (54.608713825s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.41s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-652395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

TestMultiControlPlane/serial/CopyFile (12.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp testdata/cp-test.txt ha-652395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395:/home/docker/cp-test.txt ha-652395-m02:/home/docker/cp-test_ha-652395_ha-652395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test_ha-652395_ha-652395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395:/home/docker/cp-test.txt ha-652395-m03:/home/docker/cp-test_ha-652395_ha-652395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test_ha-652395_ha-652395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395:/home/docker/cp-test.txt ha-652395-m04:/home/docker/cp-test_ha-652395_ha-652395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test_ha-652395_ha-652395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp testdata/cp-test.txt ha-652395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m02:/home/docker/cp-test.txt ha-652395:/home/docker/cp-test_ha-652395-m02_ha-652395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test_ha-652395-m02_ha-652395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m02:/home/docker/cp-test.txt ha-652395-m03:/home/docker/cp-test_ha-652395-m02_ha-652395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test_ha-652395-m02_ha-652395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m02:/home/docker/cp-test.txt ha-652395-m04:/home/docker/cp-test_ha-652395-m02_ha-652395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test_ha-652395-m02_ha-652395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp testdata/cp-test.txt ha-652395-m03:/home/docker/cp-test.txt
E0802 17:48:04.407715   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt ha-652395:/home/docker/cp-test_ha-652395-m03_ha-652395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test_ha-652395-m03_ha-652395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt ha-652395-m02:/home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test_ha-652395-m03_ha-652395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m03:/home/docker/cp-test.txt ha-652395-m04:/home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test_ha-652395-m03_ha-652395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp testdata/cp-test.txt ha-652395-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2210744680/001/cp-test_ha-652395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt ha-652395:/home/docker/cp-test_ha-652395-m04_ha-652395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395 "sudo cat /home/docker/cp-test_ha-652395-m04_ha-652395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt ha-652395-m02:/home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m02 "sudo cat /home/docker/cp-test_ha-652395-m04_ha-652395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 cp ha-652395-m04:/home/docker/cp-test.txt ha-652395-m03:/home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 ssh -n ha-652395-m03 "sudo cat /home/docker/cp-test_ha-652395-m04_ha-652395-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.40s)
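Each cp/ssh pair above is a copy-then-read-back check: "minikube cp" pushes the file to a node and "minikube ssh -n <node> sudo cat ..." confirms the content. A rough standalone sketch of one such round trip, not the test helper itself; the profile, node, and paths are taken from the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify pushes a local file to a node with "minikube cp" and reads it
// back over ssh, roughly what the cp/ssh pairs above check.
func copyAndVerify(profile, node, local, remote string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp: %v: %s", err, out)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	err := copyAndVerify("ha-652395", "ha-652395-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}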

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.458246891s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-652395 node delete m03 -v=7 --alsologtostderr: (16.395149097s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.12s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/RestartCluster (292.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-652395 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0802 18:02:43.928054   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 18:04:06.973990   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 18:05:14.261478   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-652395 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m51.83397416s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (292.58s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

TestMultiControlPlane/serial/AddSecondaryNode (78.69s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-652395 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-652395 --control-plane -v=7 --alsologtostderr: (1m17.866730199s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-652395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

TestJSONOutput/start/Command (95.56s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-047250 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0802 18:07:43.927262   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-047250 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m35.562431579s)
--- PASS: TestJSONOutput/start/Command (95.56s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-047250 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-047250 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.58s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-047250 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-047250 --output=json --user=testUser: (6.576859991s)
--- PASS: TestJSONOutput/stop/Command (6.58s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-393289 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-393289 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.225403ms)

-- stdout --
	{"specversion":"1.0","id":"a217e57e-a911-4c4e-9eac-8316c0949c5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-393289] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0363ca95-34c1-489a-8c22-49539d60d582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"47ead36a-a775-43c6-a430-f6c9eac61e86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2f5dea2-5db1-4930-a812-fbb3391d4e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig"}}
	{"specversion":"1.0","id":"697e3009-5ad0-4383-a8d2-52415e3685b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube"}}
	{"specversion":"1.0","id":"1c363de6-a722-4897-a034-0226c60221f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2d14d966-7b46-45b8-8abf-78a153fa45b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b7a041d7-1a1b-463f-8d52-42ea4d50e3c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-393289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-393289
--- PASS: TestErrorJSONOutput (0.19s)
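Every line that --output=json emits above is a CloudEvents-style JSON envelope (specversion, id, source, type, data). A small sketch that decodes the error event shown in the stdout above; the struct models only the fields visible in that output:

package main

import (
	"encoding/json"
	"fmt"
)

// event models only the envelope fields visible in the stdout above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// The last line of the stdout block above, verbatim.
	line := `{"specversion":"1.0","id":"b7a041d7-1a1b-463f-8d52-42ea4d50e3c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s (%s)\n", ev.Data["exitcode"], ev.Data["message"], ev.Data["name"])
	}
}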

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (83.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-502956 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-502956 --driver=kvm2  --container-runtime=crio: (37.605020175s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-505486 --driver=kvm2  --container-runtime=crio
E0802 18:10:14.261061   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-505486 --driver=kvm2  --container-runtime=crio: (43.002089784s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-502956
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-505486
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-505486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-505486
helpers_test.go:175: Cleaning up "first-502956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-502956
--- PASS: TestMinikubeProfile (83.16s)

TestMountStart/serial/StartWithMountFirst (27.36s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-951581 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-951581 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.356468407s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.36s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951581 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-951581 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (27.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-967861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-967861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.916476126s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.92s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-951581 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-967861
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-967861: (1.277740253s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-967861
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-967861: (21.998925114s)
--- PASS: TestMountStart/serial/RestartStopped (23.00s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-967861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

TestMultiNode/serial/FreshStart2Nodes (117.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-250383 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0802 18:12:43.927927   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
E0802 18:13:17.304838   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-250383 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m56.674824219s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.07s)

TestMultiNode/serial/DeployApp2Nodes (5.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-250383 -- rollout status deployment/busybox: (3.794158477s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-4hzwq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-6vqf8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-4hzwq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-6vqf8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-4hzwq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-6vqf8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.22s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-4hzwq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-4hzwq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-6vqf8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-250383 -- exec busybox-fc5497c4f-6vqf8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (52.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-250383 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-250383 -v 3 --alsologtostderr: (51.53888269s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.09s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-250383 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (6.94s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp testdata/cp-test.txt multinode-250383:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383:/home/docker/cp-test.txt multinode-250383-m02:/home/docker/cp-test_multinode-250383_multinode-250383-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test_multinode-250383_multinode-250383-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383:/home/docker/cp-test.txt multinode-250383-m03:/home/docker/cp-test_multinode-250383_multinode-250383-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test_multinode-250383_multinode-250383-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp testdata/cp-test.txt multinode-250383-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt multinode-250383:/home/docker/cp-test_multinode-250383-m02_multinode-250383.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test_multinode-250383-m02_multinode-250383.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt multinode-250383-m03:/home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test_multinode-250383-m02_multinode-250383-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp testdata/cp-test.txt multinode-250383-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile879850024/001/cp-test_multinode-250383-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt multinode-250383:/home/docker/cp-test_multinode-250383-m03_multinode-250383.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383 "sudo cat /home/docker/cp-test_multinode-250383-m03_multinode-250383.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 cp multinode-250383-m03:/home/docker/cp-test.txt multinode-250383-m02:/home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test_multinode-250383-m03_multinode-250383-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.94s)
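For reference, a minimal sketch of the copy-and-verify round trip this test exercises, using the profile and paths from the run above (here `minikube` is assumed to stand in for the test binary out/minikube-linux-amd64):

    # copy a local file to a node, then read it back over SSH to verify the contents
    minikube -p multinode-250383 cp testdata/cp-test.txt multinode-250383-m02:/home/docker/cp-test.txt
    minikube -p multinode-250383 ssh -n multinode-250383-m02 "sudo cat /home/docker/cp-test.txt"
    # node-to-node copies work the same way: source and destination may both be <node>:<path>
    minikube -p multinode-250383 cp multinode-250383-m02:/home/docker/cp-test.txt multinode-250383:/home/docker/cp-test_multinode-250383-m02_multinode-250383.txt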

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-250383 node stop m03: (1.366682301s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-250383 status: exit status 7 (403.514404ms)

                                                
                                                
-- stdout --
	multinode-250383
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-250383-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-250383-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr: exit status 7 (402.650026ms)

                                                
                                                
-- stdout --
	multinode-250383
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-250383-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-250383-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:14:45.002231   40594 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:14:45.002486   40594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:14:45.002496   40594 out.go:304] Setting ErrFile to fd 2...
	I0802 18:14:45.002502   40594 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:14:45.002701   40594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:14:45.002887   40594 out.go:298] Setting JSON to false
	I0802 18:14:45.002914   40594 mustload.go:65] Loading cluster: multinode-250383
	I0802 18:14:45.003009   40594 notify.go:220] Checking for updates...
	I0802 18:14:45.003361   40594 config.go:182] Loaded profile config "multinode-250383": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:14:45.003383   40594 status.go:255] checking status of multinode-250383 ...
	I0802 18:14:45.003793   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.003871   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.024064   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0802 18:14:45.024483   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.025053   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.025073   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.025449   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.025680   40594 main.go:141] libmachine: (multinode-250383) Calling .GetState
	I0802 18:14:45.027096   40594 status.go:330] multinode-250383 host status = "Running" (err=<nil>)
	I0802 18:14:45.027137   40594 host.go:66] Checking if "multinode-250383" exists ...
	I0802 18:14:45.027439   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.027472   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.042376   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34777
	I0802 18:14:45.042694   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.043135   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.043161   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.043511   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.043680   40594 main.go:141] libmachine: (multinode-250383) Calling .GetIP
	I0802 18:14:45.046235   40594 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:14:45.046668   40594 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:14:45.046704   40594 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:14:45.046854   40594 host.go:66] Checking if "multinode-250383" exists ...
	I0802 18:14:45.047191   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.047254   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.062272   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0802 18:14:45.062757   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.063218   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.063238   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.063558   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.063733   40594 main.go:141] libmachine: (multinode-250383) Calling .DriverName
	I0802 18:14:45.063933   40594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:14:45.063959   40594 main.go:141] libmachine: (multinode-250383) Calling .GetSSHHostname
	I0802 18:14:45.066376   40594 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:14:45.066740   40594 main.go:141] libmachine: (multinode-250383) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:07:47", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:11:54 +0000 UTC Type:0 Mac:52:54:00:bf:07:47 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-250383 Clientid:01:52:54:00:bf:07:47}
	I0802 18:14:45.066772   40594 main.go:141] libmachine: (multinode-250383) DBG | domain multinode-250383 has defined IP address 192.168.39.67 and MAC address 52:54:00:bf:07:47 in network mk-multinode-250383
	I0802 18:14:45.066870   40594 main.go:141] libmachine: (multinode-250383) Calling .GetSSHPort
	I0802 18:14:45.067060   40594 main.go:141] libmachine: (multinode-250383) Calling .GetSSHKeyPath
	I0802 18:14:45.067234   40594 main.go:141] libmachine: (multinode-250383) Calling .GetSSHUsername
	I0802 18:14:45.067384   40594 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383/id_rsa Username:docker}
	I0802 18:14:45.141745   40594 ssh_runner.go:195] Run: systemctl --version
	I0802 18:14:45.147901   40594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:14:45.162691   40594 kubeconfig.go:125] found "multinode-250383" server: "https://192.168.39.67:8443"
	I0802 18:14:45.162718   40594 api_server.go:166] Checking apiserver status ...
	I0802 18:14:45.162757   40594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0802 18:14:45.175947   40594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0802 18:14:45.185058   40594 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0802 18:14:45.185097   40594 ssh_runner.go:195] Run: ls
	I0802 18:14:45.189275   40594 api_server.go:253] Checking apiserver healthz at https://192.168.39.67:8443/healthz ...
	I0802 18:14:45.193319   40594 api_server.go:279] https://192.168.39.67:8443/healthz returned 200:
	ok
	I0802 18:14:45.193340   40594 status.go:422] multinode-250383 apiserver status = Running (err=<nil>)
	I0802 18:14:45.193350   40594 status.go:257] multinode-250383 status: &{Name:multinode-250383 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:14:45.193366   40594 status.go:255] checking status of multinode-250383-m02 ...
	I0802 18:14:45.193683   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.193715   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.211384   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40775
	I0802 18:14:45.211879   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.212362   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.212389   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.212738   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.212925   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetState
	I0802 18:14:45.214357   40594 status.go:330] multinode-250383-m02 host status = "Running" (err=<nil>)
	I0802 18:14:45.214374   40594 host.go:66] Checking if "multinode-250383-m02" exists ...
	I0802 18:14:45.214683   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.214717   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.229393   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0802 18:14:45.229805   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.230259   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.230284   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.230710   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.231005   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetIP
	I0802 18:14:45.233671   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | domain multinode-250383-m02 has defined MAC address 52:54:00:a8:30:da in network mk-multinode-250383
	I0802 18:14:45.234080   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:30:da", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:13:04 +0000 UTC Type:0 Mac:52:54:00:a8:30:da Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-250383-m02 Clientid:01:52:54:00:a8:30:da}
	I0802 18:14:45.234103   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | domain multinode-250383-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:a8:30:da in network mk-multinode-250383
	I0802 18:14:45.234261   40594 host.go:66] Checking if "multinode-250383-m02" exists ...
	I0802 18:14:45.234557   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.234592   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.249131   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I0802 18:14:45.249545   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.249966   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.249991   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.250291   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.250451   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .DriverName
	I0802 18:14:45.250624   40594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0802 18:14:45.250646   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetSSHHostname
	I0802 18:14:45.253314   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | domain multinode-250383-m02 has defined MAC address 52:54:00:a8:30:da in network mk-multinode-250383
	I0802 18:14:45.253746   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:30:da", ip: ""} in network mk-multinode-250383: {Iface:virbr1 ExpiryTime:2024-08-02 19:13:04 +0000 UTC Type:0 Mac:52:54:00:a8:30:da Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-250383-m02 Clientid:01:52:54:00:a8:30:da}
	I0802 18:14:45.253777   40594 main.go:141] libmachine: (multinode-250383-m02) DBG | domain multinode-250383-m02 has defined IP address 192.168.39.114 and MAC address 52:54:00:a8:30:da in network mk-multinode-250383
	I0802 18:14:45.253925   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetSSHPort
	I0802 18:14:45.254087   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetSSHKeyPath
	I0802 18:14:45.254278   40594 main.go:141] libmachine: (multinode-250383-m02) Calling .GetSSHUsername
	I0802 18:14:45.254394   40594 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-5397/.minikube/machines/multinode-250383-m02/id_rsa Username:docker}
	I0802 18:14:45.333828   40594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0802 18:14:45.346896   40594 status.go:257] multinode-250383-m02 status: &{Name:multinode-250383-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0802 18:14:45.346933   40594 status.go:255] checking status of multinode-250383-m03 ...
	I0802 18:14:45.347310   40594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0802 18:14:45.347354   40594 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0802 18:14:45.362481   40594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I0802 18:14:45.362925   40594 main.go:141] libmachine: () Calling .GetVersion
	I0802 18:14:45.363465   40594 main.go:141] libmachine: Using API Version  1
	I0802 18:14:45.363490   40594 main.go:141] libmachine: () Calling .SetConfigRaw
	I0802 18:14:45.363769   40594 main.go:141] libmachine: () Calling .GetMachineName
	I0802 18:14:45.363957   40594 main.go:141] libmachine: (multinode-250383-m03) Calling .GetState
	I0802 18:14:45.365447   40594 status.go:330] multinode-250383-m03 host status = "Stopped" (err=<nil>)
	I0802 18:14:45.365463   40594 status.go:343] host is not running, skipping remaining checks
	I0802 18:14:45.365470   40594 status.go:257] multinode-250383-m03 status: &{Name:multinode-250383-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
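A condensed sketch of the stop-node flow checked above, with the same profile name (`minikube` assumed in place of the test binary):

    # stop one worker node; `status` then exits non-zero (exit code 7) while a node is down
    minikube -p multinode-250383 node stop m03
    minikube -p multinode-250383 status
    minikube -p multinode-250383 status --alsologtostderr   # same check with driver-level logging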

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 node start m03 -v=7 --alsologtostderr
E0802 18:15:14.261600   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-250383 node start m03 -v=7 --alsologtostderr: (38.336381718s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.93s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-250383 node delete m03: (1.653059062s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)
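A minimal sketch of the delete-node check, assuming the same profile and a working kubectl context (`minikube` stands in for the test binary):

    # remove the node from the cluster, then confirm the remaining nodes are Ready
    minikube -p multinode-250383 node delete m03
    minikube -p multinode-250383 status --alsologtostderr
    kubectl get nodes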

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (182.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-250383 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0802 18:25:14.261239   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-250383 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.895848305s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-250383 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.40s)
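A sketch of the restart performed above; the flags mirror the logged invocation, and `minikube` is assumed to stand in for the test binary:

    # restart the existing multi-node profile and wait for all components to come back up
    minikube start -p multinode-250383 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p multinode-250383 status --alsologtostderr
    kubectl get nodes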

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-250383
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-250383-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-250383-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.421293ms)

                                                
                                                
-- stdout --
	* [multinode-250383-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-250383-m02' is duplicated with machine name 'multinode-250383-m02' in profile 'multinode-250383'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-250383-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-250383-m03 --driver=kvm2  --container-runtime=crio: (41.006656349s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-250383
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-250383: exit status 80 (210.918787ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-250383 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-250383-m03 already exists in multinode-250383-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-250383-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.12s)
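The two failure modes exercised above can be reproduced directly; both commands are taken from the log (`minikube` assumed in place of the test binary):

    # a new profile may not reuse a machine name that already exists inside another profile (exit 14)
    minikube start -p multinode-250383-m02 --driver=kvm2 --container-runtime=crio
    # adding a node fails (exit 80) when a standalone profile already owns the would-be node name
    minikube node add -p multinode-250383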

                                                
                                    
x
+
TestScheduledStopUnix (110.92s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-537872 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-537872 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.43645714s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537872 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-537872 -n scheduled-stop-537872
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537872 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537872 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537872 -n scheduled-stop-537872
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-537872
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-537872 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-537872
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-537872: exit status 7 (64.420616ms)

                                                
                                                
-- stdout --
	scheduled-stop-537872
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537872 -n scheduled-stop-537872
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-537872 -n scheduled-stop-537872: exit status 7 (58.838595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-537872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-537872
--- PASS: TestScheduledStopUnix (110.92s)
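A condensed sketch of the scheduled-stop sequence, using the profile name from the run above (`minikube` assumed for the test binary):

    # schedule a stop, cancel it, then schedule a short one and wait for the host to report Stopped
    minikube stop -p scheduled-stop-537872 --schedule 5m
    minikube stop -p scheduled-stop-537872 --cancel-scheduled
    minikube stop -p scheduled-stop-537872 --schedule 15s
    minikube status --format='{{.Host}}' -p scheduled-stop-537872   # exit code 7 once stopped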

                                                
                                    
x
+
TestRunningBinaryUpgrade (184.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3602212580 start -p running-upgrade-079131 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0802 18:32:43.927670   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3602212580 start -p running-upgrade-079131 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m53.882423351s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-079131 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-079131 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.797794255s)
helpers_test.go:175: Cleaning up "running-upgrade-079131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-079131
--- PASS: TestRunningBinaryUpgrade (184.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (72.54601ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-891799] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
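The usage error asserted above is straightforward to reproduce; the flag combination comes from the log (`minikube` assumed for the test binary):

    # --no-kubernetes and --kubernetes-version are mutually exclusive, so this exits with status 14
    minikube start -p NoKubernetes-891799 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if kubernetes-version was set globally, it can be cleared as the error message suggests
    minikube config unset kubernetes-version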

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (88.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891799 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891799 --driver=kvm2  --container-runtime=crio: (1m27.977803974s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-891799 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (88.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.002797202s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-891799 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-891799 status -o json: exit status 2 (241.223074ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-891799","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-891799
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (149.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1091608640 start -p stopped-upgrade-837935 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1091608640 start -p stopped-upgrade-837935 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.014262096s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1091608640 -p stopped-upgrade-837935 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1091608640 -p stopped-upgrade-837935 stop: (1.403840546s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-837935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-837935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.838162206s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.26s)
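A sketch of the stopped-binary upgrade path, reusing the logged commands (the /tmp path is the older release fetched by the test harness; `minikube` is assumed for the current build):

    # start a cluster with the old release, stop it, then restart it with the newer binary
    /tmp/minikube-v1.26.0.1091608640 start -p stopped-upgrade-837935 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.1091608640 -p stopped-upgrade-837935 stop
    minikube start -p stopped-upgrade-837935 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio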

                                                
                                    
x
+
TestNoKubernetes/serial/Start (47.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891799 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.054277108s)
--- PASS: TestNoKubernetes/serial/Start (47.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-891799 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-891799 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.832391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
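The kubelet check used above boils down to a single SSH probe (`minikube` assumed for the test binary); a non-zero exit confirms the unit is not active:

    # exits with status 1 (systemctl is-active returns 3 inside the guest) when kubelet is not running
    minikube ssh -p NoKubernetes-891799 "sudo systemctl is-active --quiet service kubelet"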

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (9.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0802 18:35:14.261477   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (8.846036438s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (9.81s)

                                                
                                    
x
+
TestPause/serial/Start (57.6s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-455569 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-455569 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (57.601030197s)
--- PASS: TestPause/serial/Start (57.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-891799
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-891799: (1.484959177s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (44.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-891799 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-891799 --driver=kvm2  --container-runtime=crio: (44.460763767s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-891799 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-891799 "sudo systemctl is-active --quiet service kubelet": exit status 1 (197.294534ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-837935
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-800809 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-800809 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.494872ms)

                                                
                                                
-- stdout --
	* [false-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0802 18:37:40.287404   53430 out.go:291] Setting OutFile to fd 1 ...
	I0802 18:37:40.287655   53430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:37:40.287664   53430 out.go:304] Setting ErrFile to fd 2...
	I0802 18:37:40.287668   53430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0802 18:37:40.287859   53430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-5397/.minikube/bin
	I0802 18:37:40.288414   53430 out.go:298] Setting JSON to false
	I0802 18:37:40.289298   53430 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4804,"bootTime":1722619056,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0802 18:37:40.289356   53430 start.go:139] virtualization: kvm guest
	I0802 18:37:40.291345   53430 out.go:177] * [false-800809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0802 18:37:40.292915   53430 out.go:177]   - MINIKUBE_LOCATION=19355
	I0802 18:37:40.292970   53430 notify.go:220] Checking for updates...
	I0802 18:37:40.295333   53430 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0802 18:37:40.296562   53430 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-5397/kubeconfig
	I0802 18:37:40.297723   53430 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-5397/.minikube
	I0802 18:37:40.298858   53430 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0802 18:37:40.300108   53430 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0802 18:37:40.301746   53430 config.go:182] Loaded profile config "cert-expiration-139745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:40.301899   53430 config.go:182] Loaded profile config "cert-options-643429": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0802 18:37:40.302036   53430 config.go:182] Loaded profile config "kubernetes-upgrade-132946": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0802 18:37:40.302141   53430 driver.go:392] Setting default libvirt URI to qemu:///system
	I0802 18:37:40.337481   53430 out.go:177] * Using the kvm2 driver based on user configuration
	I0802 18:37:40.338614   53430 start.go:297] selected driver: kvm2
	I0802 18:37:40.338627   53430 start.go:901] validating driver "kvm2" against <nil>
	I0802 18:37:40.338638   53430 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0802 18:37:40.340429   53430 out.go:177] 
	W0802 18:37:40.341640   53430 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0802 18:37:40.342752   53430 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-800809 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.201:8443
  name: cert-expiration-139745
contexts:
- context:
    cluster: cert-expiration-139745
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-139745
  name: cert-expiration-139745
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-139745
  user:
    client-certificate: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt
    client-key: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-800809

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-800809"

                                                
                                                
----------------------- debugLogs end: false-800809 [took: 2.631841767s] --------------------------------
helpers_test.go:175: Cleaning up "false-800809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-800809
--- PASS: TestNetworkPlugins/group/false (2.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (165.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-407306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-407306 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (2m45.437836634s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (165.44s)
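The FirstStart step above is just a long-running invocation of the built minikube binary. A minimal sketch of reproducing it outside the harness, with the flag values copied from the log (the 20-minute timeout is an assumption, not the test's own deadline):

package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()

	// Same binary layout as this report: the freshly built out/minikube-linux-amd64.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "no-preload-407306",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--preload=false", "--driver=kvm2",
		"--container-runtime=crio", "--kubernetes-version=v1.31.0-rc.0",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}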

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m34.038600014s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (94.04s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-407306 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [adebf309-5b22-49f5-8390-a2d275d4ad50] Pending
helpers_test.go:344: "busybox" [adebf309-5b22-49f5-8390-a2d275d4ad50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [adebf309-5b22-49f5-8390-a2d275d4ad50] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003793443s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-407306 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)
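The "waiting 8m0s for pods matching ..." lines above come from a polling helper that watches the pod move from Pending to Running. A simplified client-go approximation of that wait (the real helper shells out to kubectl and also tracks Ready conditions; this sketch only checks the Running phase):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForBusybox(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "integration-test=busybox",
			})
			if err != nil {
				return false, nil // keep polling on transient API errors
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForBusybox(context.Background(), cs); err != nil {
		panic(err)
	}
	fmt.Println("busybox is Running")
}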

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-407306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-407306 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)
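The describe step after "addons enable metrics-server" is how the test confirms the --images/--registries overrides landed on the deployment. A minimal sketch of that check; the expected substring ("fake.domain") is inferred from the flags above, not quoted from the test source:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-407306",
		"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl describe failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "fake.domain") {
		log.Fatalf("metrics-server image was not overridden:\n%s", out)
	}
	log.Println("metrics-server registry override applied")
}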

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6464f4d-f98c-4dfd-95d1-f5db6f710d13] Pending
helpers_test.go:344: "busybox" [e6464f4d-f98c-4dfd-95d1-f5db6f710d13] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6464f4d-f98c-4dfd-95d1-f5db6f710d13] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00353928s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504903 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-504903 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-490984 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-490984 --alsologtostderr -v=3: (1.306105782s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-490984 -n old-k8s-version-490984: exit status 7 (68.668338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-490984 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
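The "exit status 7 (may be ok)" lines above reflect that minikube status encodes component state in its exit code, so the harness inspects the code instead of treating any non-zero exit as a failure. A minimal sketch of that handling; treating 7 as a stopped host mirrors this log rather than a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func hostStatus(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit: the status text (e.g. "Stopped") is still on stdout.
		return string(out), ee.ExitCode(), nil
	}
	if err != nil {
		return "", 0, err // binary missing, permission error, etc.
	}
	return string(out), 0, nil
}

func main() {
	status, code, err := hostStatus("old-k8s-version-490984")
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%q exit=%d\n", status, code)
}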

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504903 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (8m57.091226151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504903 -n default-k8s-diff-port-504903
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (537.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (289.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-198962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
E0802 18:45:14.262055   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 18:46:37.305773   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
E0802 18:47:43.928229   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-198962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (4m49.00858964s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (289.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-198962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-198962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043378702s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-198962 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-198962 --alsologtostderr -v=3: (10.506857655s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198962 -n newest-cni-198962
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198962 -n newest-cni-198962: exit status 7 (72.927025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-198962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-198962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-198962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-rc.0: (36.730420459s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-198962 -n newest-cni-198962
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-198962 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.07s)
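The "Found non-minikube image" line above comes from scanning the JSON image list for entries outside the registries the harness expects. A rough sketch of that idea; the exact JSON shape of `minikube image list --format=json` is not assumed here beyond "an array of objects", and the allow-listed registries are illustrative:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
	"strings"
)

func main() {
	raw, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-198962",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}
	var entries []json.RawMessage
	if err := json.Unmarshal(raw, &entries); err != nil {
		log.Fatalf("unexpected JSON shape: %v", err)
	}
	for _, e := range entries {
		// Match on the raw JSON text to avoid assuming field names.
		if !strings.Contains(string(e), "registry.k8s.io") &&
			!strings.Contains(string(e), "gcr.io/k8s-minikube") {
			log.Printf("non-minikube image entry: %s", e)
		}
	}
}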

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-198962 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198962 -n newest-cni-198962
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198962 -n newest-cni-198962: exit status 2 (240.180632ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198962 -n newest-cni-198962
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198962 -n newest-cni-198962: exit status 2 (247.104417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-198962 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-198962 --alsologtostderr -v=1: (1.006208643s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-198962 -n newest-cni-198962
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-198962 -n newest-cni-198962
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (95.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-757654 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-757654 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m35.528345065s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-757654 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [14462fa7-ad50-439f-b828-3450362277a6] Pending
helpers_test.go:344: "busybox" [14462fa7-ad50-439f-b828-3450362277a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [14462fa7-ad50-439f-b828-3450362277a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003169299s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-757654 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-757654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-757654 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (625.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-757654 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-757654 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (10m24.956899507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-757654 -n embed-certs-757654
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (625.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (61.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.574295572s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (89.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m29.743311941s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tpdnl" [107eec25-740a-4541-ad73-fcade73a823a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tpdnl" [107eec25-740a-4541-ad73-fcade73a823a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003938989s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
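The DNS, Localhost, and HairPin checks above are three probes run inside the netcat deployment: a cluster DNS lookup, a dial to localhost:8080, and a hairpin dial back through the pod's own Service name. A small consolidated sketch with the commands copied from the log (the loop itself is illustrative, not the harness's code):

package main

import (
	"log"
	"os/exec"
)

func main() {
	probes := []struct {
		name string
		cmd  string
	}{
		{"DNS", "nslookup kubernetes.default"},
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"},
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		out, err := exec.Command("kubectl", "--context", "auto-800809",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", p.cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("%s probe failed: %v\n%s", p.name, err, out)
		}
		log.Printf("%s probe ok", p.name)
	}
}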

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (85.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m25.010304576s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-78745" [eef5ba2c-6708-40a3-806b-0a85cc54a18b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005734567s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-khhhx" [98acfcc6-954c-4b2c-bb2a-a5dd0989c20d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-khhhx" [98acfcc6-954c-4b2c-bb2a-a5dd0989c20d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.131016927s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (75.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0802 19:10:14.261161   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/addons-892214/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m15.395647167s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (75.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (61.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m1.749631339s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hl5lj" [0ab9ff7e-2243-4daa-a58c-53dc2c78b11b] Running
E0802 19:10:46.976817   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006394817s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jwlrk" [5109eefe-74e0-410e-ba4c-93eff5b5412c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jwlrk" [5109eefe-74e0-410e-ba4c-93eff5b5412c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004335606s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.685678904s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jtdhl" [b5a885df-fd47-42ee-8bfd-c9c5a683d9a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jtdhl" [b5a885df-fd47-42ee-8bfd-c9c5a683d9a4] Running
E0802 19:11:29.539069   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/no-preload-407306/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004221179s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b4c2t" [a2998a64-1bec-4571-8c48-c5968d874349] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b4c2t" [a2998a64-1bec-4571-8c48-c5968d874349] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005262469s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (101.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-800809 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.770741695s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vdgxw" [567adfbe-7252-4a09-a710-1b2211c817f3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00504356s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lgpbd" [724fd65e-0627-4f4b-92d3-f25338cf0947] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lgpbd" [724fd65e-0627-4f4b-92d3-f25338cf0947] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004389296s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-800809 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-800809 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-f7ptk" [2ddf446b-9cb0-446f-8008-f012bfe7a649] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-f7ptk" [2ddf446b-9cb0-446f-8008-f012bfe7a649] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003724197s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-800809 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-800809 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E0802 19:13:59.468139   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:14:04.589336   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory
E0802 19:14:14.829537   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/auto-800809/client.crt: no such file or directory

                                                
                                    

Test skip (40/322)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
273 TestStartStop/group/disable-driver-mounts 0.14
284 TestNetworkPlugins/group/kubenet 2.9
292 TestNetworkPlugins/group/cilium 3.17

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-684611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-684611
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.9s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-800809 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.201:8443
  name: cert-expiration-139745
contexts:
- context:
    cluster: cert-expiration-139745
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-139745
  name: cert-expiration-139745
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-139745
  user:
    client-certificate: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt
    client-key: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-800809

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-800809"

                                                
                                                
----------------------- debugLogs end: kubenet-800809 [took: 2.761362986s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-800809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-800809
--- SKIP: TestNetworkPlugins/group/kubenet (2.90s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.17s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0802 18:37:43.927748   12547 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/functional-096349/client.crt: no such file or directory
panic.go:626: 
----------------------- debugLogs start: cilium-800809 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-800809

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-800809" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19355-5397/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.61.201:8443
  name: cert-expiration-139745
contexts:
- context:
    cluster: cert-expiration-139745
    extensions:
    - extension:
        last-update: Fri, 02 Aug 2024 18:37:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-139745
  name: cert-expiration-139745
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-139745
  user:
    client-certificate: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.crt
    client-key: /home/jenkins/minikube-integration/19355-5397/.minikube/profiles/cert-expiration-139745/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-800809

>>> host: docker daemon status:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: docker daemon config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: docker system info:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: cri-docker daemon status:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: cri-docker daemon config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: cri-dockerd version:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: containerd daemon status:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: containerd daemon config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: containerd config dump:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: crio daemon status:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: crio daemon config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: /etc/crio:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

>>> host: crio config:
* Profile "cilium-800809" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-800809"

----------------------- debugLogs end: cilium-800809 [took: 3.023237525s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-800809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-800809
--- SKIP: TestNetworkPlugins/group/cilium (3.17s)
